Feb 26 14:13:40 crc systemd[1]: Starting Kubernetes Kubelet... Feb 26 14:13:40 crc restorecon[4703]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 26 14:13:40 
crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 
14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc 
restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 
crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 
crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:40 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 
14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 14:13:41 crc 
restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 
14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 14:13:41 crc restorecon[4703]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
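[annotation] On the long restorecon run that ends just above (before the kubelet startup messages): restorecon prints "not reset as customized by admin" when a file's current type — here container_file_t — is one SELinux treats as customizable, so a default relabel deliberately leaves it alone; only a forced relabel overrides that. A minimal sketch of how one might inspect and, if genuinely needed, force those labels — the commands are standard policycoreutils/SELinux tooling, the first path is taken from the log, and everything else is illustrative:

```sh
# Show the current SELinux label on one of the paths from the log above
ls -Z /var/lib/kubelet/config.json

# List the local (admin-defined) file-context customizations that
# restorecon honors when it skips a file
semanage fcontext -l -C

# Force a recursive relabel, overriding customizable/customized types.
# -F is what turns "not reset as customized by admin" into an actual
# reset; use with care on a running node.
restorecon -RFv /var/lib/kubelet
```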
Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 14:13:42 crc kubenswrapper[4809]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.012506 4809 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019258 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019289 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019295 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019301 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019306 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019311 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019316 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019320 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019324 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019329 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019334 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019339 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019344 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019349 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019355 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019362 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019366 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019370 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019374 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019379 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019384 
4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019388 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019392 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019399 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019405 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019409 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019415 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019420 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019424 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019429 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019434 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019439 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019443 4809 feature_gate.go:330] unrecognized feature gate: Example Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019448 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019453 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019468 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019475 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019480 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019486 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019491 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019497 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019502 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019507 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019511 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019516 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019521 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019527 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019533 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019538 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019543 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019547 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019553 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019559 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
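The "Flag --... has been deprecated" notices earlier in this startup ask for --container-runtime-endpoint, --volume-plugin-dir, --register-with-taints and --system-reserved to be set via the file passed with --config, which the FLAG dump further down shows as /etc/kubernetes/kubelet.conf. Purely as an illustration (this is not how CRC/OpenShift actually manages that file, and the values are copied from the FLAG dump later in this log), the equivalent KubeletConfiguration fields would look roughly like the JSON this sketch prints; the kubelet config file can be YAML or JSON:

import json

# Hypothetical sketch: values copied from the kubelet FLAG dump in this log.
# Whether hand-editing the node's kubelet config is appropriate on an
# OpenShift/CRC node (where machine-config owns it) is a separate question.
kubelet_config = {
    "kind": "KubeletConfiguration",
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "/var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints (key=:NoSchedule in the flag form)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved
    "systemReserved": {
        "cpu": "200m",
        "memory": "350Mi",
        "ephemeral-storage": "350Mi",
    },
}

print(json.dumps(kubelet_config, indent=2))
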
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019564 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019569 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019573 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019577 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019581 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019585 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019590 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019593 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019597 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019600 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019604 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019608 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019613 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019617 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019622 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019626 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019630 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.019634 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019739 4809 flags.go:64] FLAG: --address="0.0.0.0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019749 4809 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019761 4809 flags.go:64] FLAG: --anonymous-auth="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019766 4809 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019772 4809 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019777 4809 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019783 4809 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019790 4809 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019794 4809 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="30s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019798 4809 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019803 4809 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019807 4809 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019812 4809 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019817 4809 flags.go:64] FLAG: --cgroup-root="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019822 4809 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019828 4809 flags.go:64] FLAG: --client-ca-file="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019833 4809 flags.go:64] FLAG: --cloud-config="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019838 4809 flags.go:64] FLAG: --cloud-provider="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019843 4809 flags.go:64] FLAG: --cluster-dns="[]" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019854 4809 flags.go:64] FLAG: --cluster-domain="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019859 4809 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019864 4809 flags.go:64] FLAG: --config-dir="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019869 4809 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019875 4809 flags.go:64] FLAG: --container-log-max-files="5" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019883 4809 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019889 4809 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019894 4809 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019900 4809 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019906 4809 flags.go:64] FLAG: --contention-profiling="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019911 4809 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019916 4809 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019922 4809 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019928 4809 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019935 4809 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019941 4809 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019946 4809 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019951 4809 flags.go:64] FLAG: --enable-load-reader="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019956 4809 flags.go:64] FLAG: --enable-server="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 
14:13:42.019961 4809 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019971 4809 flags.go:64] FLAG: --event-burst="100" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019977 4809 flags.go:64] FLAG: --event-qps="50" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019982 4809 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019993 4809 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.019998 4809 flags.go:64] FLAG: --eviction-hard="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020005 4809 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020029 4809 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020035 4809 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020040 4809 flags.go:64] FLAG: --eviction-soft="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020046 4809 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020051 4809 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020056 4809 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020061 4809 flags.go:64] FLAG: --experimental-mounter-path="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020066 4809 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020072 4809 flags.go:64] FLAG: --fail-swap-on="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020077 4809 flags.go:64] FLAG: --feature-gates="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020084 4809 flags.go:64] FLAG: --file-check-frequency="20s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020090 4809 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020095 4809 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020100 4809 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020104 4809 flags.go:64] FLAG: --healthz-port="10248" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020108 4809 flags.go:64] FLAG: --help="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020112 4809 flags.go:64] FLAG: --hostname-override="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020116 4809 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020120 4809 flags.go:64] FLAG: --http-check-frequency="20s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020124 4809 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020128 4809 flags.go:64] FLAG: --image-credential-provider-config="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020132 4809 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020137 4809 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020142 4809 flags.go:64] 
FLAG: --image-service-endpoint="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020147 4809 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020152 4809 flags.go:64] FLAG: --kube-api-burst="100" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020158 4809 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020164 4809 flags.go:64] FLAG: --kube-api-qps="50" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020170 4809 flags.go:64] FLAG: --kube-reserved="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020175 4809 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020180 4809 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020186 4809 flags.go:64] FLAG: --kubelet-cgroups="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020191 4809 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020202 4809 flags.go:64] FLAG: --lock-file="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020207 4809 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020213 4809 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020218 4809 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020249 4809 flags.go:64] FLAG: --log-json-split-stream="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020255 4809 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020260 4809 flags.go:64] FLAG: --log-text-split-stream="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020265 4809 flags.go:64] FLAG: --logging-format="text" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020271 4809 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020283 4809 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020291 4809 flags.go:64] FLAG: --manifest-url="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020296 4809 flags.go:64] FLAG: --manifest-url-header="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020303 4809 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020308 4809 flags.go:64] FLAG: --max-open-files="1000000" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020318 4809 flags.go:64] FLAG: --max-pods="110" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020323 4809 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020328 4809 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020333 4809 flags.go:64] FLAG: --memory-manager-policy="None" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020338 4809 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020343 4809 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020348 4809 flags.go:64] 
FLAG: --node-ip="192.168.126.11" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020353 4809 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020376 4809 flags.go:64] FLAG: --node-status-max-images="50" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020381 4809 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020386 4809 flags.go:64] FLAG: --oom-score-adj="-999" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020393 4809 flags.go:64] FLAG: --pod-cidr="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020398 4809 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020410 4809 flags.go:64] FLAG: --pod-manifest-path="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020415 4809 flags.go:64] FLAG: --pod-max-pids="-1" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020420 4809 flags.go:64] FLAG: --pods-per-core="0" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020425 4809 flags.go:64] FLAG: --port="10250" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020431 4809 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020436 4809 flags.go:64] FLAG: --provider-id="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020441 4809 flags.go:64] FLAG: --qos-reserved="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020445 4809 flags.go:64] FLAG: --read-only-port="10255" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020450 4809 flags.go:64] FLAG: --register-node="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020462 4809 flags.go:64] FLAG: --register-schedulable="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020468 4809 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020481 4809 flags.go:64] FLAG: --registry-burst="10" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020486 4809 flags.go:64] FLAG: --registry-qps="5" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020491 4809 flags.go:64] FLAG: --reserved-cpus="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020496 4809 flags.go:64] FLAG: --reserved-memory="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020503 4809 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020508 4809 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020514 4809 flags.go:64] FLAG: --rotate-certificates="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020520 4809 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020525 4809 flags.go:64] FLAG: --runonce="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020531 4809 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020536 4809 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020541 4809 flags.go:64] FLAG: --seccomp-default="false" 
Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020547 4809 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020552 4809 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020558 4809 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020564 4809 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020569 4809 flags.go:64] FLAG: --storage-driver-password="root" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020575 4809 flags.go:64] FLAG: --storage-driver-secure="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020580 4809 flags.go:64] FLAG: --storage-driver-table="stats" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020585 4809 flags.go:64] FLAG: --storage-driver-user="root" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020591 4809 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020596 4809 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020602 4809 flags.go:64] FLAG: --system-cgroups="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020607 4809 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020615 4809 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020621 4809 flags.go:64] FLAG: --tls-cert-file="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020625 4809 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020636 4809 flags.go:64] FLAG: --tls-min-version="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020641 4809 flags.go:64] FLAG: --tls-private-key-file="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020645 4809 flags.go:64] FLAG: --topology-manager-policy="none" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020651 4809 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020657 4809 flags.go:64] FLAG: --topology-manager-scope="container" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020662 4809 flags.go:64] FLAG: --v="2" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020669 4809 flags.go:64] FLAG: --version="false" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020684 4809 flags.go:64] FLAG: --vmodule="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020690 4809 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.020696 4809 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020829 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020836 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020841 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020847 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
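Between the passes of feature-gate warnings the kubelet logs every command-line flag it ended up with, one flags.go:64 entry per flag in the form FLAG: --name="value". When comparing a node's effective flags against what the rendered configuration is supposed to contain, it can help to turn that dump into a dictionary. A small sketch, assuming the journal has been saved to a text file first (the file name and the journalctl command in the comment are only examples):

import re
import sys

# Matches entries like: flags.go:64] FLAG: --cgroup-driver="cgroupfs"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w.-]+)="(.*?)"')

def parse_flag_dump(text: str) -> dict:
    """Collect the kubelet's FLAG: --name="value" startup entries."""
    return {name: value for name, value in FLAG_RE.findall(text)}

if __name__ == "__main__":
    # e.g. journalctl -u kubelet -b > kubelet.log   (example file name)
    text = open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log").read()
    flags = parse_flag_dump(text)
    for name in ("--config", "--kubeconfig", "--node-ip", "--system-reserved"):
        print(f"{name} = {flags.get(name)!r}")
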
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020852 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020857 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020862 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020866 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020870 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020874 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020879 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020883 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020887 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020891 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020895 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020901 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020907 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020911 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020915 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020920 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020924 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020928 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020932 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020936 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020940 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020945 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020949 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020954 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020958 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020962 4809 feature_gate.go:330] 
unrecognized feature gate: MinimumKubeletVersion Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020966 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020970 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020975 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020989 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020993 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.020998 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021002 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021024 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021030 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021035 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021039 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021044 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021048 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021053 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021057 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021062 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021066 4809 feature_gate.go:330] unrecognized feature gate: Example Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021070 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021075 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021079 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021083 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021087 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021092 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021096 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021101 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 14:13:42 crc 
kubenswrapper[4809]: W0226 14:13:42.021106 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021111 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021115 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021125 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021130 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021134 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021138 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021145 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021150 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021155 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021160 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021164 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021169 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021174 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021187 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.021192 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.021243 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.033331 4809 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.033387 4809 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034091 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034274 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034294 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034305 4809 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034314 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034320 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034325 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034331 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034336 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034344 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034350 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034355 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034361 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034372 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034377 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034382 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034386 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034391 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034396 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034401 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034405 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034410 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034417 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034424 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034430 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034439 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034448 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034453 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034457 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034463 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034468 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034472 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034478 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034482 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034488 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034493 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034498 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034502 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034510 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034514 4809 feature_gate.go:330] unrecognized feature gate: Example Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034532 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034537 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034544 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034550 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034554 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034559 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034586 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034593 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034600 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 14:13:42 crc 
kubenswrapper[4809]: W0226 14:13:42.034606 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034612 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034618 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034624 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034630 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034638 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034686 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034698 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034705 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034710 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034715 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034721 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034730 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034735 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034740 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034746 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.034751 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035444 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035468 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035474 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035479 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035486 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.035497 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035720 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035731 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035736 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035741 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035747 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035754 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035759 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035766 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035771 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035777 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035782 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035788 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035795 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035800 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035805 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035811 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035816 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035821 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035826 4809 feature_gate.go:330] unrecognized feature gate: Example Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035831 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035835 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035840 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035845 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035849 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035854 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035861 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035866 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035872 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035878 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035883 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035889 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035895 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035900 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035905 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035909 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035914 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035920 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035924 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035929 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035934 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035938 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035943 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035949 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035954 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035959 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035963 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035967 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035972 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035980 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
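The blocks of feature_gate.go:330 warnings repeat several times in this excerpt, apparently because the gate set is parsed more than once during startup. The names they list (GatewayAPI, NewOLM, InsightsConfig, and so on) look like OpenShift-level feature gates that the embedded Kubernetes feature-gate registry does not know, so they are skipped with a warning rather than applied; the Kubernetes-level result is the feature_gate.go:386 line, "feature gates: {map[Name:bool ...]}". A throwaway sketch for pulling both out of saved journal text, under the same file-name assumption as above:

import re
import sys

UNRECOGNIZED_RE = re.compile(r"unrecognized feature gate: (\S+)")
SUMMARY_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def feature_gate_report(text: str):
    """Return (sorted unknown gate names, effective gate map) from journal text."""
    unrecognized = sorted(set(UNRECOGNIZED_RE.findall(text)))
    summaries = SUMMARY_RE.findall(text)
    effective = {}
    if summaries:
        # The same map is logged more than once; use the last occurrence.
        for pair in summaries[-1].split():
            name, _, value = pair.partition(":")
            effective[name] = value == "true"
    return unrecognized, effective

if __name__ == "__main__":
    text = open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log").read()
    unknown, gates = feature_gate_report(text)
    print(f"{len(unknown)} unrecognized gate names, e.g. {unknown[:5]}")
    print(f"effective gates: {gates}")
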
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035986 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035991 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.035996 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036001 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036029 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036035 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036040 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036045 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036049 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036054 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036059 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036064 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036069 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036073 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036077 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036082 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036087 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036091 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036095 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036099 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036104 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.036109 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.036119 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.038143 4809 server.go:940] "Client rotation is on, will bootstrap in background" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.041944 4809 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.045687 4809 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.045871 4809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.048735 4809 server.go:997] "Starting client certificate rotation" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.048769 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.048990 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.086197 4809 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.088943 4809 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.091744 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.106757 4809 log.go:25] "Validated CRI v1 runtime API" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.149173 4809 log.go:25] "Validated CRI v1 image API" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.152442 4809 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.160130 4809 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-14-08-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.160204 4809 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.177151 4809 manager.go:217] Machine: {Timestamp:2026-02-26 14:13:42.175031307 +0000 UTC m=+0.648351850 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f486f530-323a-4284-90aa-e6ee0bb3cb0d BootID:174a06ad-2f49-4e47-8b01-2d4967845ee0 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d2:7b:ae Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d2:7b:ae Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0c:14:52 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:77:3f:0a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:33:e5:86 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a0:0f:36 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:10:3b:82:a1:01 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:e1:b0:c1:98:00 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] 
SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.177396 4809 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.177655 4809 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.181136 4809 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.181335 4809 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.181367 4809 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.181569 4809 topology_manager.go:138] "Creating topology manager with none policy" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.181580 4809 container_manager_linux.go:303] "Creating device plugin manager" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.182247 4809 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.182294 4809 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.182587 4809 state_mem.go:36] "Initialized new in-memory state store" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.182693 4809 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.186073 4809 kubelet.go:418] "Attempting to sync node with API server" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.186109 4809 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.186173 4809 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.186191 4809 kubelet.go:324] "Adding apiserver pod source" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.186208 4809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.191298 4809 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.192726 4809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.193182 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.193244 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.193311 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.193373 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.194836 4809 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.196985 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197032 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197043 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197053 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197067 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197077 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197085 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197098 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197112 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197121 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197134 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.197142 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.198493 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 
14:13:42.198827 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.199061 4809 server.go:1280] "Started kubelet" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.199274 4809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.199432 4809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.200003 4809 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 26 14:13:42 crc systemd[1]: Started Kubernetes Kubelet. Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.202845 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.203137 4809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.204530 4809 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.204554 4809 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.204639 4809 server.go:460] "Adding debug handlers to kubelet server" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.204718 4809 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.205665 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.207566 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.207693 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.207800 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="200ms" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.212743 4809 factory.go:55] Registering systemd factory Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.212766 4809 factory.go:221] Registration of the systemd container factory successfully Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.213226 4809 factory.go:153] Registering CRI-O factory Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.213248 4809 factory.go:221] Registration of the crio container factory successfully Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.213317 4809 factory.go:219] Registration of the 
containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.213348 4809 factory.go:103] Registering Raw factory Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.213364 4809 manager.go:1196] Started watching for new ooms in manager Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.214238 4809 manager.go:319] Starting recovery of all containers Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.218234 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897d167d03c4b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,LastTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220747 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220820 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220844 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220865 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220884 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220904 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220922 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220940 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220963 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220980 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.220998 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221050 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221069 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221094 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221113 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221129 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221146 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221172 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221189 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221209 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221230 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221248 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221264 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221281 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221297 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221318 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221339 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221357 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221377 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221395 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221412 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221460 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221480 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221497 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221516 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221534 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221552 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221571 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221589 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221645 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221663 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221682 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221699 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221717 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221734 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221751 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221770 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221788 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221806 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221823 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221840 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221858 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221883 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221903 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221924 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221942 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221964 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.221987 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226441 4809 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226487 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226514 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226537 4809 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226558 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226580 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226599 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226619 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226639 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226658 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226677 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226730 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226750 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226768 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226789 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226810 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226828 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226848 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226867 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226886 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226903 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226921 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226938 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226958 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226976 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.226993 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227042 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227068 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227086 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227106 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227128 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227147 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227165 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227184 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227207 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227232 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227257 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227282 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227305 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227330 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227355 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227376 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227395 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227414 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227433 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227452 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227470 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227497 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227518 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227540 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227560 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227585 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227613 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227640 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227666 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227694 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227742 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227771 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227789 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227815 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227834 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227856 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227876 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227898 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227916 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227935 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227953 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227973 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.227993 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228010 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228051 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228069 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228094 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228112 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228131 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228180 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228197 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228216 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228234 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228252 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228271 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228293 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228310 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228327 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228346 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228364 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228381 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228399 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228416 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228434 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228451 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228469 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228486 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228504 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228525 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228545 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228563 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228583 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228601 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228619 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228637 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228655 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228672 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228692 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228709 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228727 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228745 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228763 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228783 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228802 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228820 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228839 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228856 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228875 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228892 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228910 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228930 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228948 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228965 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.228989 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229041 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229071 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229097 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229123 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229140 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229159 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229177 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229197 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229229 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229266 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229285 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229311 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229331 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229349 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229367 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229385 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229404 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229421 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229439 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229456 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229473 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229493 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229512 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229531 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229550 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229568 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229585 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229605 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229622 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229640 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229658 4809 reconstruct.go:97] "Volume reconstruction finished" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.229670 4809 reconciler.go:26] "Reconciler: start to sync state" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.233762 4809 manager.go:324] Recovery completed Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.243981 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.245591 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.245656 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.245669 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.247096 4809 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.247124 4809 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.247148 4809 state_mem.go:36] "Initialized new in-memory state store" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.252718 4809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.255356 4809 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.255402 4809 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.255425 4809 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.255468 4809 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.260478 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.260549 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.264061 4809 policy_none.go:49] "None policy: Start" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.265076 4809 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.265132 4809 state_mem.go:35] "Initializing new in-memory state store" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.306449 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.330722 4809 manager.go:334] "Starting Device Plugin manager" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.330918 4809 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.330934 4809 server.go:79] "Starting device plugin registration server" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.331299 4809 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.331366 4809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.331485 4809 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.331691 4809 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.331703 4809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.340678 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.355939 4809 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 14:13:42 crc kubenswrapper[4809]: 
I0226 14:13:42.356144 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357262 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357306 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357431 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357667 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.357713 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358227 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358289 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358350 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358394 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358494 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358590 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.358634 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359148 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359176 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359186 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359287 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359472 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359539 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359503 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359598 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.359923 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360040 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360088 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360140 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360714 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360765 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360783 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360792 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360904 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.360938 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.363091 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.363119 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.363130 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.408893 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="400ms" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431552 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431640 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431672 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431715 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431730 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc 
kubenswrapper[4809]: I0226 14:13:42.431824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.431948 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432137 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432190 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432250 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432279 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432756 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432778 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432787 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.432812 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.433222 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534344 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534363 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534421 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534442 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534464 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534482 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534503 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534523 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534528 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534564 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534533 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534546 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534632 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534677 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534771 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534780 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534788 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534812 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534818 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534809 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534870 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534925 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.534957 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.634195 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.635512 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.635554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.635565 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.635594 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.636112 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.690739 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.714597 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.731651 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.733634 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dbea199127df4f5253fadcc114f68a5ab2265c115768771451f5744d7587658c WatchSource:0}: Error finding container dbea199127df4f5253fadcc114f68a5ab2265c115768771451f5744d7587658c: Status 404 returned error can't find the container with id dbea199127df4f5253fadcc114f68a5ab2265c115768771451f5744d7587658c Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.740478 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: I0226 14:13:42.746594 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.754806 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1d6bed7ce5dd11204e8db9bb8c4b12675578b17d75d7bf60f190af28ed7210d1 WatchSource:0}: Error finding container 1d6bed7ce5dd11204e8db9bb8c4b12675578b17d75d7bf60f190af28ed7210d1: Status 404 returned error can't find the container with id 1d6bed7ce5dd11204e8db9bb8c4b12675578b17d75d7bf60f190af28ed7210d1 Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.758613 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e147cc7ff583faafadc534565f74086f49dfef8c93a91f25462cdf135870c8ea WatchSource:0}: Error finding container e147cc7ff583faafadc534565f74086f49dfef8c93a91f25462cdf135870c8ea: Status 404 returned error can't find the container with id e147cc7ff583faafadc534565f74086f49dfef8c93a91f25462cdf135870c8ea Feb 26 14:13:42 crc kubenswrapper[4809]: W0226 14:13:42.765869 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-08ff474260018605a6f272dd567f0b5681192802e5b4f3fa3eba57957159e5d1 WatchSource:0}: Error finding container 08ff474260018605a6f272dd567f0b5681192802e5b4f3fa3eba57957159e5d1: Status 404 returned error can't find the container with id 08ff474260018605a6f272dd567f0b5681192802e5b4f3fa3eba57957159e5d1 Feb 26 14:13:42 crc kubenswrapper[4809]: E0226 14:13:42.809902 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="800ms" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.036882 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.038807 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.038875 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.038887 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.038918 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.039573 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Feb 26 14:13:43 crc kubenswrapper[4809]: W0226 14:13:43.136564 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.136657 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.200165 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.265156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"08ff474260018605a6f272dd567f0b5681192802e5b4f3fa3eba57957159e5d1"} Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.266382 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e147cc7ff583faafadc534565f74086f49dfef8c93a91f25462cdf135870c8ea"} Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.267439 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d6bed7ce5dd11204e8db9bb8c4b12675578b17d75d7bf60f190af28ed7210d1"} Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.268975 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"446c5294b384fe5cb272eb1e46f4d5823a4627ec0d95f654712c4acad7a49013"} Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.270997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dbea199127df4f5253fadcc114f68a5ab2265c115768771451f5744d7587658c"} Feb 26 14:13:43 crc kubenswrapper[4809]: W0226 14:13:43.342780 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.342906 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:43 crc kubenswrapper[4809]: W0226 14:13:43.410121 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:43 crc 
kubenswrapper[4809]: E0226 14:13:43.410278 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.610984 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="1.6s" Feb 26 14:13:43 crc kubenswrapper[4809]: W0226 14:13:43.746412 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.746508 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.839749 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.841238 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.841281 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.841293 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:43 crc kubenswrapper[4809]: I0226 14:13:43.841322 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:43 crc kubenswrapper[4809]: E0226 14:13:43.841931 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.106048 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 14:13:44 crc kubenswrapper[4809]: E0226 14:13:44.107049 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.200338 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.275561 4809 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed" exitCode=0 Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.275610 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.275686 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.276670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.276701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.276721 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.277336 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8" exitCode=0 Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.277421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.277445 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.278229 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.278264 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.278278 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.278564 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281401 4809 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b" exitCode=0 Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281622 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281740 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.281778 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.283378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.283422 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.283442 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.284476 4809 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572" exitCode=0 Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.284559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.284578 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.285405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.285439 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.285453 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.287941 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.287979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.287994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.288024 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2"} Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.288031 4809 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.288871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.288902 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:44 crc kubenswrapper[4809]: I0226 14:13:44.288914 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: W0226 14:13:45.169879 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:45 crc kubenswrapper[4809]: E0226 14:13:45.169960 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.200406 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Feb 26 14:13:45 crc kubenswrapper[4809]: E0226 14:13:45.212289 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="3.2s" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.293893 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.293946 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.293982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.293990 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.295028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.295076 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.295087 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.297790 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.297828 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.297838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.297848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.300111 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803" exitCode=0 Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.300165 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.300332 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.301132 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.301148 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.301156 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.312120 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.312141 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.312310 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f"} Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.313577 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.313793 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:45 crc 
kubenswrapper[4809]: I0226 14:13:45.313804 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.314136 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.314150 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.314159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.442911 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.444812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.444859 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.444871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.444896 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:45 crc kubenswrapper[4809]: E0226 14:13:45.445446 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Feb 26 14:13:45 crc kubenswrapper[4809]: I0226 14:13:45.600337 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.318582 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbb66341a18f364919534a8098403909f912d6ef0c7eeddc581eee4f11e87781"} Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.318742 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.319662 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.319700 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.319713 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.322148 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b" exitCode=0 Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.322220 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b"} Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.322321 4809 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.322371 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.322378 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323372 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323432 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323482 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323513 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323567 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323578 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.323586 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:46 crc kubenswrapper[4809]: I0226 14:13:46.384001 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.324963 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.325529 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.325561 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326210 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326231 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326240 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:47 crc kubenswrapper[4809]: I0226 14:13:47.326978 
4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.116140 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.207997 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.330780 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175"} Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.330823 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06"} Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.330877 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.331783 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.331811 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.331822 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.646601 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.647739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.647783 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.647795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:48 crc kubenswrapper[4809]: I0226 14:13:48.647817 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.287835 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.340855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290"} Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.340926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515"} Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.340946 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e"} Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.340944 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.340957 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342446 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:49 crc kubenswrapper[4809]: I0226 14:13:49.342576 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.343398 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.343514 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344426 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344486 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344787 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:50 crc kubenswrapper[4809]: I0226 14:13:50.344799 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.059141 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.059472 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.060338 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.061946 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.062072 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.062093 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.170997 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.345910 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.345966 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347132 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347186 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347207 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347226 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347241 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.347250 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.818261 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:51 crc kubenswrapper[4809]: I0226 14:13:51.827545 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:52 crc kubenswrapper[4809]: E0226 14:13:52.341175 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.347194 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.348057 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.348096 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.348108 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.509343 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.509576 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.511120 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.511170 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:52 crc kubenswrapper[4809]: I0226 14:13:52.511182 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.350724 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.352515 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.352567 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.352579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.357869 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:53 crc kubenswrapper[4809]: I0226 14:13:53.457375 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:13:54 crc kubenswrapper[4809]: I0226 14:13:54.353571 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:54 crc kubenswrapper[4809]: I0226 14:13:54.355275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:54 crc kubenswrapper[4809]: I0226 14:13:54.355342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:54 crc kubenswrapper[4809]: I0226 14:13:54.355356 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:55 crc kubenswrapper[4809]: I0226 14:13:55.355822 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:13:55 crc kubenswrapper[4809]: I0226 14:13:55.356898 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:13:55 crc kubenswrapper[4809]: I0226 14:13:55.356958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:13:55 crc kubenswrapper[4809]: I0226 14:13:55.356984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:13:55 crc kubenswrapper[4809]: W0226 14:13:55.810433 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 26 14:13:55 crc kubenswrapper[4809]: I0226 14:13:55.810607 4809 trace.go:236] Trace[762288410]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 14:13:45.809) (total time: 10001ms): Feb 26 14:13:55 crc kubenswrapper[4809]: Trace[762288410]: ---"Objects 
listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (14:13:55.810) Feb 26 14:13:55 crc kubenswrapper[4809]: Trace[762288410]: [10.00102562s] [10.00102562s] END Feb 26 14:13:55 crc kubenswrapper[4809]: E0226 14:13:55.810641 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 14:13:56 crc kubenswrapper[4809]: W0226 14:13:56.128594 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 26 14:13:56 crc kubenswrapper[4809]: I0226 14:13:56.128720 4809 trace.go:236] Trace[1695789521]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 14:13:46.126) (total time: 10002ms): Feb 26 14:13:56 crc kubenswrapper[4809]: Trace[1695789521]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:13:56.128) Feb 26 14:13:56 crc kubenswrapper[4809]: Trace[1695789521]: [10.002067989s] [10.002067989s] END Feb 26 14:13:56 crc kubenswrapper[4809]: E0226 14:13:56.128745 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 14:13:56 crc kubenswrapper[4809]: I0226 14:13:56.201122 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 26 14:13:56 crc kubenswrapper[4809]: I0226 14:13:56.458316 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:13:56 crc kubenswrapper[4809]: I0226 14:13:56.458410 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:13:56 crc kubenswrapper[4809]: W0226 14:13:56.539682 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 26 14:13:56 crc kubenswrapper[4809]: I0226 14:13:56.539784 4809 trace.go:236] Trace[744662137]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(26-Feb-2026 14:13:46.538) (total time: 10001ms): Feb 26 14:13:56 crc kubenswrapper[4809]: Trace[744662137]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:13:56.539) Feb 26 14:13:56 crc kubenswrapper[4809]: Trace[744662137]: [10.001725638s] [10.001725638s] END Feb 26 14:13:56 crc kubenswrapper[4809]: E0226 14:13:56.539810 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 14:13:57 crc kubenswrapper[4809]: E0226 14:13:57.793835 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1897d167d03c4b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,LastTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:13:58 crc kubenswrapper[4809]: E0226 14:13:58.209932 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 14:13:58 crc kubenswrapper[4809]: E0226 14:13:58.413630 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 26 14:13:58 crc kubenswrapper[4809]: E0226 14:13:58.649234 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Feb 26 14:13:59 crc kubenswrapper[4809]: I0226 14:13:59.288215 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:13:59 crc kubenswrapper[4809]: I0226 14:13:59.288310 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:14:00 crc kubenswrapper[4809]: W0226 14:14:00.127269 4809 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 26 14:14:00 crc kubenswrapper[4809]: I0226 14:14:00.127412 4809 trace.go:236] Trace[1568326090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Feb-2026 14:13:50.125) (total time: 10002ms): Feb 26 14:14:00 crc kubenswrapper[4809]: Trace[1568326090]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:00.127) Feb 26 14:14:00 crc kubenswrapper[4809]: Trace[1568326090]: [10.002085571s] [10.002085571s] END Feb 26 14:14:00 crc kubenswrapper[4809]: E0226 14:14:00.127448 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.094984 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.095258 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.096999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.097089 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.097130 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.114458 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.371651 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.373387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.373483 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:01 crc kubenswrapper[4809]: I0226 14:14:01.373515 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:02 crc kubenswrapper[4809]: E0226 14:14:02.341496 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:02 crc kubenswrapper[4809]: W0226 14:14:02.771704 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z Feb 26 14:14:02 crc kubenswrapper[4809]: E0226 
14:14:02.771756 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:02 crc kubenswrapper[4809]: W0226 14:14:02.773612 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z Feb 26 14:14:02 crc kubenswrapper[4809]: E0226 14:14:02.773705 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:02 crc kubenswrapper[4809]: I0226 14:14:02.775072 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 14:14:02 crc kubenswrapper[4809]: I0226 14:14:02.775141 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 14:14:02 crc kubenswrapper[4809]: I0226 14:14:02.776313 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z Feb 26 14:14:02 crc kubenswrapper[4809]: W0226 14:14:02.776546 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z Feb 26 14:14:02 crc kubenswrapper[4809]: E0226 14:14:02.776600 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:02 crc kubenswrapper[4809]: I0226 14:14:02.789631 4809 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48634->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 26 14:14:02 crc kubenswrapper[4809]: I0226 14:14:02.789688 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48634->192.168.126.11:17697: read: connection reset by peer" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.203461 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:03Z is after 2026-02-23T05:33:13Z Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.377395 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.378884 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbb66341a18f364919534a8098403909f912d6ef0c7eeddc581eee4f11e87781" exitCode=255 Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.378931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dbb66341a18f364919534a8098403909f912d6ef0c7eeddc581eee4f11e87781"} Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.379161 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.380053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.380116 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.380131 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:03 crc kubenswrapper[4809]: I0226 14:14:03.380855 4809 scope.go:117] "RemoveContainer" containerID="dbb66341a18f364919534a8098403909f912d6ef0c7eeddc581eee4f11e87781" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.100283 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.203309 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:04Z is after 2026-02-23T05:33:13Z Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.293513 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 
14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.383287 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.383911 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.386159 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" exitCode=255 Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.386199 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8"} Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.386235 4809 scope.go:117] "RemoveContainer" containerID="dbb66341a18f364919534a8098403909f912d6ef0c7eeddc581eee4f11e87781" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.386354 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.387621 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.387650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.387659 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.388147 4809 scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:04 crc kubenswrapper[4809]: E0226 14:14:04.388329 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:04 crc kubenswrapper[4809]: I0226 14:14:04.397959 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:04 crc kubenswrapper[4809]: E0226 14:14:04.817451 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:04Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.049639 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.051550 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 
14:14:05.051631 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.051658 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.051696 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:05 crc kubenswrapper[4809]: E0226 14:14:05.057408 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:05Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.202890 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:05Z is after 2026-02-23T05:33:13Z Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.390965 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.392752 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.393554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.393609 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.393623 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:05 crc kubenswrapper[4809]: I0226 14:14:05.394408 4809 scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:05 crc kubenswrapper[4809]: E0226 14:14:05.394621 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.203575 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:06Z is after 2026-02-23T05:33:13Z Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.395282 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.396246 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.396287 
4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.396303 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.396890 4809 scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:06 crc kubenswrapper[4809]: E0226 14:14:06.397058 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.457842 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.457921 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:14:06 crc kubenswrapper[4809]: I0226 14:14:06.766124 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 14:14:06 crc kubenswrapper[4809]: E0226 14:14:06.772688 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:07 crc kubenswrapper[4809]: I0226 14:14:07.204496 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:07Z is after 2026-02-23T05:33:13Z Feb 26 14:14:07 crc kubenswrapper[4809]: E0226 14:14:07.798284 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:07Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897d167d03c4b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,LastTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.116939 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.117364 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.118870 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.119272 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.119473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.120469 4809 scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:08 crc kubenswrapper[4809]: E0226 14:14:08.120945 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:08 crc kubenswrapper[4809]: I0226 14:14:08.205373 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:08Z is after 2026-02-23T05:33:13Z Feb 26 14:14:09 crc kubenswrapper[4809]: I0226 14:14:09.202952 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:09Z is after 2026-02-23T05:33:13Z Feb 26 14:14:09 crc kubenswrapper[4809]: W0226 14:14:09.209131 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:09Z is after 2026-02-23T05:33:13Z Feb 26 14:14:09 crc kubenswrapper[4809]: E0226 14:14:09.209239 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:09Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Feb 26 14:14:09 crc kubenswrapper[4809]: W0226 14:14:09.946799 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:09Z is after 2026-02-23T05:33:13Z Feb 26 14:14:09 crc kubenswrapper[4809]: E0226 14:14:09.946913 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:10 crc kubenswrapper[4809]: W0226 14:14:10.111603 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:10Z is after 2026-02-23T05:33:13Z Feb 26 14:14:10 crc kubenswrapper[4809]: E0226 14:14:10.111715 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:10 crc kubenswrapper[4809]: I0226 14:14:10.203316 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:10Z is after 2026-02-23T05:33:13Z Feb 26 14:14:11 crc kubenswrapper[4809]: I0226 14:14:11.204414 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:11Z is after 2026-02-23T05:33:13Z Feb 26 14:14:11 crc kubenswrapper[4809]: E0226 14:14:11.821159 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:11Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.057908 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.059480 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.059535 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.059552 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.059588 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:12 crc kubenswrapper[4809]: E0226 14:14:12.064516 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:12Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 14:14:12 crc kubenswrapper[4809]: I0226 14:14:12.202538 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:12Z is after 2026-02-23T05:33:13Z Feb 26 14:14:12 crc kubenswrapper[4809]: E0226 14:14:12.341738 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:12 crc kubenswrapper[4809]: W0226 14:14:12.416858 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:12Z is after 2026-02-23T05:33:13Z Feb 26 14:14:12 crc kubenswrapper[4809]: E0226 14:14:12.416993 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 14:14:13 crc kubenswrapper[4809]: I0226 14:14:13.202944 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.100425 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.100614 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.101642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.101702 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.101725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.102493 4809 
scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:14 crc kubenswrapper[4809]: E0226 14:14:14.102735 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.202857 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:14Z is after 2026-02-23T05:33:13Z Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.715826 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47930->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.715910 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47930->192.168.126.11:10357: read: connection reset by peer" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.715973 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.716173 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.717294 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.717349 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.717365 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.718122 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 14:14:14 crc kubenswrapper[4809]: I0226 14:14:14.718298 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418" gracePeriod=30 Feb 26 14:14:15 crc 
kubenswrapper[4809]: I0226 14:14:15.206290 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:15Z is after 2026-02-23T05:33:13Z Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.419219 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.419927 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418" exitCode=255 Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.419996 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418"} Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.420312 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4"} Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.420570 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.421739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.421924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:15 crc kubenswrapper[4809]: I0226 14:14:15.422083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:16 crc kubenswrapper[4809]: I0226 14:14:16.204160 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:16Z is after 2026-02-23T05:33:13Z Feb 26 14:14:16 crc kubenswrapper[4809]: I0226 14:14:16.423410 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:16 crc kubenswrapper[4809]: I0226 14:14:16.425334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:16 crc kubenswrapper[4809]: I0226 14:14:16.425405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:16 crc kubenswrapper[4809]: I0226 14:14:16.425418 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:17 crc kubenswrapper[4809]: I0226 14:14:17.202611 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:17Z is after 2026-02-23T05:33:13Z Feb 26 14:14:17 crc kubenswrapper[4809]: E0226 14:14:17.801831 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:17Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897d167d03c4b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,LastTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:18 crc kubenswrapper[4809]: I0226 14:14:18.203036 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:18Z is after 2026-02-23T05:33:13Z Feb 26 14:14:18 crc kubenswrapper[4809]: E0226 14:14:18.825340 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:18Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.065226 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.066571 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.066633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.066651 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.066686 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:19 crc kubenswrapper[4809]: E0226 14:14:19.070161 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:19Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 14:14:19 crc kubenswrapper[4809]: I0226 14:14:19.204646 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T14:14:19Z is after 2026-02-23T05:33:13Z Feb 26 14:14:20 crc kubenswrapper[4809]: I0226 14:14:20.203139 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:20Z is after 2026-02-23T05:33:13Z Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.059482 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.059665 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.060812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.060881 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.060894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:21 crc kubenswrapper[4809]: I0226 14:14:21.202307 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:21Z is after 2026-02-23T05:33:13Z Feb 26 14:14:22 crc kubenswrapper[4809]: W0226 14:14:22.024657 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 26 14:14:22 crc kubenswrapper[4809]: E0226 14:14:22.025590 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 14:14:22 crc kubenswrapper[4809]: I0226 14:14:22.204949 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:22 crc kubenswrapper[4809]: E0226 14:14:22.341916 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.176100 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.194281 4809 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.204171 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 
14:14:23.457692 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.458080 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.460070 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.460133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:23 crc kubenswrapper[4809]: I0226 14:14:23.460153 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:24 crc kubenswrapper[4809]: I0226 14:14:24.207850 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:25 crc kubenswrapper[4809]: I0226 14:14:25.205633 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:25 crc kubenswrapper[4809]: E0226 14:14:25.827675 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 14:14:25 crc kubenswrapper[4809]: W0226 14:14:25.866393 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 14:14:25 crc kubenswrapper[4809]: E0226 14:14:25.866471 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.070500 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.072568 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.072644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.072668 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.072709 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:26 crc kubenswrapper[4809]: E0226 14:14:26.081006 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the 
cluster scope" node="crc" Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.206989 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.458655 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:14:26 crc kubenswrapper[4809]: I0226 14:14:26.458723 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:14:27 crc kubenswrapper[4809]: I0226 14:14:27.206549 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.810436 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d03c4b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,LastTimestamp:2026-02-26 14:13:42.199028624 +0000 UTC m=+0.672349147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.815760 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.819803 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.828096 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.833320 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d852dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.334725827 +0000 UTC m=+0.808046350,LastTimestamp:2026-02-26 14:13:42.334725827 +0000 UTC m=+0.808046350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.839114 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.357287464 +0000 UTC m=+0.830607987,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.844793 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.357302044 +0000 UTC m=+0.830622567,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.850645 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.357312935 +0000 UTC m=+0.830633458,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.856928 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.358257414 +0000 UTC m=+0.831577937,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.861670 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.358283776 +0000 UTC m=+0.831604299,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.866893 4809 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.358295446 +0000 UTC m=+0.831615969,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.871073 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.358377589 +0000 UTC m=+0.831698112,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.875180 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.35839222 +0000 UTC m=+0.831712743,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.879694 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.358409211 +0000 UTC m=+0.831729734,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc 
kubenswrapper[4809]: E0226 14:14:27.884731 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.359165953 +0000 UTC m=+0.832486486,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.888898 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.359182233 +0000 UTC m=+0.832502756,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.893389 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.359191674 +0000 UTC m=+0.832512207,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.897225 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.359496456 +0000 UTC m=+0.832816979,Count:6,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.900957 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.35958882 +0000 UTC m=+0.832909363,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.906599 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.359605501 +0000 UTC m=+0.832926024,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.910854 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.359894773 +0000 UTC m=+0.833215296,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.915991 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC 
m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.359919464 +0000 UTC m=+0.833239987,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.920379 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d304116b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d304116b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245675371 +0000 UTC m=+0.718995894,LastTimestamp:2026-02-26 14:13:42.359928065 +0000 UTC m=+0.833248588,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.924253 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d3036665\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d3036665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245631589 +0000 UTC m=+0.718952112,LastTimestamp:2026-02-26 14:13:42.360373323 +0000 UTC m=+0.833693846,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.928400 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897d167d303eb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897d167d303eb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.245665621 +0000 UTC m=+0.718986144,LastTimestamp:2026-02-26 14:13:42.360386164 +0000 UTC m=+0.833706687,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.934403 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d167f09c4690 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.742189712 +0000 UTC m=+1.215510235,LastTimestamp:2026-02-26 14:13:42.742189712 +0000 UTC m=+1.215510235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.938252 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d167f151aef3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.754078451 +0000 UTC m=+1.227398984,LastTimestamp:2026-02-26 14:13:42.754078451 +0000 UTC m=+1.227398984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.942117 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d167f190af2a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.758207274 +0000 UTC m=+1.231527797,LastTimestamp:2026-02-26 14:13:42.758207274 +0000 UTC m=+1.231527797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.950611 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d167f1b63a63 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.760667747 +0000 UTC m=+1.233988270,LastTimestamp:2026-02-26 14:13:42.760667747 +0000 UTC m=+1.233988270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.955274 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d167f26e503b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:42.772731963 +0000 UTC m=+1.246052486,LastTimestamp:2026-02-26 14:13:42.772731963 +0000 UTC m=+1.246052486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.960930 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d168179395bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.395931579 +0000 UTC m=+1.869252132,LastTimestamp:2026-02-26 14:13:43.395931579 +0000 UTC m=+1.869252132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.965669 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16817952277 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.396033143 +0000 UTC m=+1.869353676,LastTimestamp:2026-02-26 14:13:43.396033143 +0000 UTC m=+1.869353676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.966978 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1681796c97e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.396141438 +0000 UTC m=+1.869461971,LastTimestamp:2026-02-26 14:13:43.396141438 +0000 UTC m=+1.869461971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.970062 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d1681796df9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.396147098 +0000 UTC m=+1.869467631,LastTimestamp:2026-02-26 14:13:43.396147098 +0000 UTC m=+1.869467631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.973380 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d16817998e63 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.396322915 +0000 UTC m=+1.869643448,LastTimestamp:2026-02-26 14:13:43.396322915 +0000 UTC m=+1.869643448,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.979430 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1681862dd8f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.409515919 +0000 UTC m=+1.882836442,LastTimestamp:2026-02-26 14:13:43.409515919 +0000 UTC m=+1.882836442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.986063 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d168188b18c7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.412152519 +0000 UTC m=+1.885473052,LastTimestamp:2026-02-26 14:13:43.412152519 +0000 UTC m=+1.885473052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.990560 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16818a0b01f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.413567519 +0000 UTC m=+1.886888052,LastTimestamp:2026-02-26 14:13:43.413567519 +0000 UTC m=+1.886888052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:27 crc kubenswrapper[4809]: E0226 14:14:27.995791 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d16818c299f8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.415790072 +0000 UTC m=+1.889110595,LastTimestamp:2026-02-26 14:13:43.415790072 +0000 UTC m=+1.889110595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.002774 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d16818d5bd64 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.417044324 +0000 UTC m=+1.890364887,LastTimestamp:2026-02-26 14:13:43.417044324 +0000 UTC m=+1.890364887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.009776 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16818d5e687 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.417054855 +0000 UTC m=+1.890375468,LastTimestamp:2026-02-26 14:13:43.417054855 +0000 UTC m=+1.890375468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.017166 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1682b8430b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.730466992 +0000 UTC m=+2.203787515,LastTimestamp:2026-02-26 14:13:43.730466992 +0000 UTC m=+2.203787515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.022832 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1682c367a87 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.742151303 +0000 UTC m=+2.215471846,LastTimestamp:2026-02-26 14:13:43.742151303 +0000 UTC m=+2.215471846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.027507 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1682c4b5be8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.74351972 +0000 UTC m=+2.216840243,LastTimestamp:2026-02-26 14:13:43.74351972 +0000 UTC m=+2.216840243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.033716 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16837ae1e16 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.934541334 +0000 UTC m=+2.407861907,LastTimestamp:2026-02-26 14:13:43.934541334 +0000 UTC m=+2.407861907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.039317 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d168387fa811 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.948273681 +0000 UTC m=+2.421594214,LastTimestamp:2026-02-26 14:13:43.948273681 +0000 UTC m=+2.421594214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.044711 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1683891aad8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.94945404 +0000 UTC m=+2.422774573,LastTimestamp:2026-02-26 14:13:43.94945404 +0000 UTC m=+2.422774573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.049509 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d168434425f0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.12892312 +0000 UTC m=+2.602243643,LastTimestamp:2026-02-26 14:13:44.12892312 +0000 UTC m=+2.602243643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.058216 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d168446714b9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.147989689 +0000 UTC m=+2.621310252,LastTimestamp:2026-02-26 14:13:44.147989689 +0000 UTC m=+2.621310252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.064168 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1684c2cf78d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.278398861 +0000 UTC m=+2.751719394,LastTimestamp:2026-02-26 14:13:44.278398861 +0000 UTC m=+2.751719394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.068804 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1684c50b761 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.280741729 +0000 UTC m=+2.754062252,LastTimestamp:2026-02-26 14:13:44.280741729 +0000 UTC m=+2.754062252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.071268 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d1684c96d33e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.285336382 +0000 UTC m=+2.758656895,LastTimestamp:2026-02-26 14:13:44.285336382 +0000 UTC m=+2.758656895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.076171 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d1684cdbb1bf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.289849791 +0000 UTC m=+2.763170314,LastTimestamp:2026-02-26 14:13:44.289849791 +0000 UTC m=+2.763170314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.081026 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d168588aec05 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.485882885 +0000 UTC m=+2.959203408,LastTimestamp:2026-02-26 14:13:44.485882885 +0000 UTC m=+2.959203408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.085223 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d168588cadba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.48599801 +0000 UTC m=+2.959318533,LastTimestamp:2026-02-26 14:13:44.48599801 +0000 UTC m=+2.959318533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.089139 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d168588e6bc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.486112195 +0000 UTC m=+2.959432718,LastTimestamp:2026-02-26 14:13:44.486112195 +0000 UTC m=+2.959432718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.093165 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d168588e958f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.486122895 +0000 UTC m=+2.959443418,LastTimestamp:2026-02-26 14:13:44.486122895 +0000 UTC m=+2.959443418,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.096896 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897d16859717895 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.500992149 +0000 UTC m=+2.974312672,LastTimestamp:2026-02-26 14:13:44.500992149 +0000 UTC m=+2.974312672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.100900 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d168597178a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.500992169 +0000 UTC m=+2.974312692,LastTimestamp:2026-02-26 14:13:44.500992169 +0000 UTC m=+2.974312692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.106219 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d1685978dd2a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.50147665 +0000 UTC m=+2.974797173,LastTimestamp:2026-02-26 14:13:44.50147665 +0000 UTC m=+2.974797173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.110572 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d168597fb49a 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.501925018 +0000 UTC m=+2.975245541,LastTimestamp:2026-02-26 14:13:44.501925018 +0000 UTC m=+2.975245541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.114957 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d1685995f324 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.50338282 +0000 UTC m=+2.976703343,LastTimestamp:2026-02-26 14:13:44.50338282 +0000 UTC m=+2.976703343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.119867 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16859985f0c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.503541516 +0000 UTC m=+2.976862049,LastTimestamp:2026-02-26 14:13:44.503541516 +0000 UTC m=+2.976862049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.124498 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d168650d3e42 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.69575021 +0000 UTC m=+3.169070743,LastTimestamp:2026-02-26 14:13:44.69575021 +0000 UTC m=+3.169070743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.129042 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d16865442831 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.699349041 +0000 UTC m=+3.172669564,LastTimestamp:2026-02-26 14:13:44.699349041 +0000 UTC m=+3.172669564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.133567 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d16865c9a63e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.708097598 +0000 UTC m=+3.181418121,LastTimestamp:2026-02-26 14:13:44.708097598 +0000 UTC m=+3.181418121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.137525 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d16865da4df0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.709189104 +0000 UTC m=+3.182509637,LastTimestamp:2026-02-26 14:13:44.709189104 +0000 UTC m=+3.182509637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.143196 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d168665dea72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.717814386 +0000 UTC m=+3.191134909,LastTimestamp:2026-02-26 14:13:44.717814386 +0000 UTC m=+3.191134909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.147198 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d168668686db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.720475867 +0000 UTC m=+3.193796390,LastTimestamp:2026-02-26 14:13:44.720475867 +0000 UTC m=+3.193796390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.153649 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687247f694 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.917702292 +0000 UTC 
m=+3.391022815,LastTimestamp:2026-02-26 14:13:44.917702292 +0000 UTC m=+3.391022815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.159769 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d168726c5296 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.920085142 +0000 UTC m=+3.393405665,LastTimestamp:2026-02-26 14:13:44.920085142 +0000 UTC m=+3.393405665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.163795 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897d16873b44ee4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.941580004 +0000 UTC m=+3.414900537,LastTimestamp:2026-02-26 14:13:44.941580004 +0000 UTC m=+3.414900537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.169170 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687400c1c0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.946590144 +0000 UTC m=+3.419910667,LastTimestamp:2026-02-26 14:13:44.946590144 +0000 UTC m=+3.419910667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.174033 4809 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687410b412 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:44.947635218 +0000 UTC m=+3.420955741,LastTimestamp:2026-02-26 14:13:44.947635218 +0000 UTC m=+3.420955741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.177971 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687d41b6f7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.101842167 +0000 UTC m=+3.575162680,LastTimestamp:2026-02-26 14:13:45.101842167 +0000 UTC m=+3.575162680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.184852 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687e32cfe0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.11764272 +0000 UTC m=+3.590963243,LastTimestamp:2026-02-26 14:13:45.11764272 +0000 UTC m=+3.590963243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.190199 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687e448167 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.118802279 +0000 UTC m=+3.592122792,LastTimestamp:2026-02-26 14:13:45.118802279 +0000 UTC m=+3.592122792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.195830 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1688914b50e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.30021915 +0000 UTC m=+3.773539673,LastTimestamp:2026-02-26 14:13:45.30021915 +0000 UTC m=+3.773539673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.200612 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16889965ba0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.308715936 +0000 UTC m=+3.782036459,LastTimestamp:2026-02-26 14:13:45.308715936 +0000 UTC m=+3.782036459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.201116 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.203154 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897d16889e11e48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.313615432 +0000 UTC m=+3.786935945,LastTimestamp:2026-02-26 14:13:45.313615432 +0000 UTC m=+3.786935945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.205949 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d168943df716 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.487472406 +0000 UTC m=+3.960792919,LastTimestamp:2026-02-26 14:13:45.487472406 +0000 UTC m=+3.960792919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.209970 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16895012516 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.500263702 +0000 UTC m=+3.973584225,LastTimestamp:2026-02-26 14:13:45.500263702 +0000 UTC m=+3.973584225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.216086 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d168c62de43d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:46.325279805 +0000 UTC m=+4.798600338,LastTimestamp:2026-02-26 14:13:46.325279805 +0000 UTC m=+4.798600338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.224930 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1692960cda1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:47.989560737 +0000 UTC m=+6.462881260,LastTimestamp:2026-02-26 14:13:47.989560737 +0000 UTC m=+6.462881260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.232581 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1692a0abd9a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.000697754 +0000 UTC m=+6.474018307,LastTimestamp:2026-02-26 14:13:48.000697754 +0000 UTC m=+6.474018307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.237902 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1692a1ffdc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.002090433 +0000 UTC m=+6.475410956,LastTimestamp:2026-02-26 14:13:48.002090433 +0000 UTC m=+6.475410956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.244256 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1693545512e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.189085998 +0000 UTC m=+6.662406561,LastTimestamp:2026-02-26 14:13:48.189085998 +0000 UTC m=+6.662406561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.248917 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1693666af49 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.208049993 +0000 UTC m=+6.681370516,LastTimestamp:2026-02-26 14:13:48.208049993 +0000 UTC m=+6.681370516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.253275 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d169367401c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.20892308 +0000 UTC m=+6.682243603,LastTimestamp:2026-02-26 14:13:48.20892308 +0000 UTC m=+6.682243603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.256293 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.257367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.257396 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.257413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.258064 4809 scope.go:117] "RemoveContainer" 
containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.258229 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d169408a9637 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.378175031 +0000 UTC m=+6.851495554,LastTimestamp:2026-02-26 14:13:48.378175031 +0000 UTC m=+6.851495554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.262756 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d169413f8788 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.390033288 +0000 UTC m=+6.863353811,LastTimestamp:2026-02-26 14:13:48.390033288 +0000 UTC m=+6.863353811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.266955 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d169415489ce openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.391410126 +0000 UTC m=+6.864730649,LastTimestamp:2026-02-26 14:13:48.391410126 +0000 UTC m=+6.864730649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.274443 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1694b4be201 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.558615041 +0000 UTC m=+7.031935584,LastTimestamp:2026-02-26 14:13:48.558615041 +0000 UTC m=+7.031935584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.278151 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1694c20200a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.572524554 +0000 UTC m=+7.045845077,LastTimestamp:2026-02-26 14:13:48.572524554 +0000 UTC m=+7.045845077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.281908 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d1694c33cff1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.573814769 +0000 UTC m=+7.047135292,LastTimestamp:2026-02-26 14:13:48.573814769 +0000 UTC m=+7.047135292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.286780 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16956cf6917 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.751784215 +0000 UTC m=+7.225104738,LastTimestamp:2026-02-26 14:13:48.751784215 +0000 UTC m=+7.225104738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.290599 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897d16957d5813a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:48.768960826 +0000 UTC m=+7.242281379,LastTimestamp:2026-02-26 14:13:48.768960826 +0000 UTC m=+7.242281379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.297128 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2228e1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458389926 +0000 UTC m=+14.931710459,LastTimestamp:2026-02-26 14:13:56.458389926 +0000 UTC m=+14.931710459,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.301365 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2229d004 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458450948 +0000 UTC m=+14.931771481,LastTimestamp:2026-02-26 14:13:56.458450948 +0000 UTC m=+14.931771481,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.305992 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.1897d16bcad5ba41 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:59.288289857 +0000 UTC m=+17.761610380,LastTimestamp:2026-02-26 14:13:59.288289857 +0000 UTC m=+17.761610380,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.309674 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d16bcad74b52 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:59.28839253 +0000 UTC m=+17.761713053,LastTimestamp:2026-02-26 14:13:59.28839253 +0000 UTC m=+17.761713053,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.313359 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.1897d16c9aaa85be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 14:14:28 crc kubenswrapper[4809]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 14:14:28 crc kubenswrapper[4809]: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:02.775119294 +0000 UTC m=+21.248439817,LastTimestamp:2026-02-26 14:14:02.775119294 +0000 UTC m=+21.248439817,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.316934 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d16c9aab2f51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:02.775162705 +0000 UTC m=+21.248483228,LastTimestamp:2026-02-26 14:14:02.775162705 +0000 UTC m=+21.248483228,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.320587 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.1897d16c9b8896d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:48634->192.168.126.11:17697: read: connection reset by peer Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:02.789672661 +0000 UTC m=+21.262993184,LastTimestamp:2026-02-26 14:14:02.789672661 +0000 UTC m=+21.262993184,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.324762 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d16c9b8927a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48634->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:02.789709732 +0000 UTC m=+21.263030255,LastTimestamp:2026-02-26 14:14:02.789709732 +0000 UTC m=+21.263030255,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.329109 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897d1687e448167\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897d1687e448167 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:45.118802279 +0000 UTC m=+3.592122792,LastTimestamp:2026-02-26 14:14:03.381832939 +0000 UTC m=+21.855153462,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.334482 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d16b2228e1a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2228e1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458389926 +0000 UTC m=+14.931710459,LastTimestamp:2026-02-26 14:14:06.457899124 +0000 UTC m=+24.931219677,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.338461 4809 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-controller-manager-crc.1897d16b2229d004\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2229d004 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458450948 +0000 UTC m=+14.931771481,LastTimestamp:2026-02-26 14:14:06.457961816 +0000 UTC m=+24.931282379,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.342715 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.1897d16f62642927 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:47930->192.168.126.11:10357: read: connection reset by peer Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:14.715885863 +0000 UTC m=+33.189206386,LastTimestamp:2026-02-26 14:14:14.715885863 +0000 UTC m=+33.189206386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.346400 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16f6264f769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47930->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:14.715938665 +0000 UTC m=+33.189259198,LastTimestamp:2026-02-26 14:14:14.715938665 +0000 UTC 
m=+33.189259198,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.351289 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16f6288aa9a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:14:14.718278298 +0000 UTC m=+33.191598821,LastTimestamp:2026-02-26 14:14:14.718278298 +0000 UTC m=+33.191598821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.354961 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d16818a0b01f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16818a0b01f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.413567519 +0000 UTC m=+1.886888052,LastTimestamp:2026-02-26 14:14:14.73552548 +0000 UTC m=+33.208846003,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.357994 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d1682b8430b0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1682b8430b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.730466992 +0000 UTC m=+2.203787515,LastTimestamp:2026-02-26 14:14:14.89403642 +0000 UTC m=+33.367356963,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.362053 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d1682c367a87\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d1682c367a87 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:43.742151303 +0000 UTC m=+2.215471846,LastTimestamp:2026-02-26 14:14:14.901770651 +0000 UTC m=+33.375091174,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.369313 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d16b2228e1a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 14:14:28 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2228e1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 14:14:28 crc kubenswrapper[4809]: body: Feb 26 14:14:28 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458389926 +0000 UTC m=+14.931710459,LastTimestamp:2026-02-26 14:14:26.458706124 +0000 UTC m=+44.932026667,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 14:14:28 crc kubenswrapper[4809]: > Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.373423 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897d16b2229d004\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897d16b2229d004 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:13:56.458450948 +0000 UTC m=+14.931771481,LastTimestamp:2026-02-26 14:14:26.458768296 +0000 UTC m=+44.932088839,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.460993 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.463717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30"} Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.463926 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.464764 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.464798 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:28 crc kubenswrapper[4809]: I0226 14:14:28.464809 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:28 crc kubenswrapper[4809]: W0226 14:14:28.943983 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 26 14:14:28 crc kubenswrapper[4809]: E0226 14:14:28.944103 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.205409 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.470487 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.471864 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.473768 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" exitCode=255 Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.473815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30"} Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.473859 4809 scope.go:117] "RemoveContainer" containerID="269d7adae03893a8c634bddc3ad0b5bdf9002c81778cacb6825fc122cfb5aed8" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.474131 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.475553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.475700 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.475716 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:29 crc kubenswrapper[4809]: I0226 14:14:29.476322 4809 scope.go:117] "RemoveContainer" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" Feb 26 14:14:29 crc kubenswrapper[4809]: E0226 14:14:29.476527 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:30 crc kubenswrapper[4809]: I0226 14:14:30.204414 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:30 crc kubenswrapper[4809]: I0226 14:14:30.478893 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 14:14:31 crc kubenswrapper[4809]: I0226 14:14:31.208574 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:32 crc kubenswrapper[4809]: I0226 14:14:32.204956 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:32 crc kubenswrapper[4809]: E0226 14:14:32.342204 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:32 crc kubenswrapper[4809]: E0226 14:14:32.834105 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.081854 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.083252 
4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.083357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.083383 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.083429 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:33 crc kubenswrapper[4809]: E0226 14:14:33.090607 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 14:14:33 crc kubenswrapper[4809]: I0226 14:14:33.203889 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.101238 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.101446 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.103133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.103173 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.103187 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.103778 4809 scope.go:117] "RemoveContainer" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" Feb 26 14:14:34 crc kubenswrapper[4809]: E0226 14:14:34.103981 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:34 crc kubenswrapper[4809]: I0226 14:14:34.203501 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.204210 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:35 crc kubenswrapper[4809]: W0226 14:14:35.358793 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API 
group "storage.k8s.io" at the cluster scope Feb 26 14:14:35 crc kubenswrapper[4809]: E0226 14:14:35.358872 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.559735 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.560120 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.566966 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.567086 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.567121 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.570048 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.604709 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.604928 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.606097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.606124 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:35 crc kubenswrapper[4809]: I0226 14:14:35.606133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:36 crc kubenswrapper[4809]: I0226 14:14:36.207761 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:36 crc kubenswrapper[4809]: I0226 14:14:36.494180 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:36 crc kubenswrapper[4809]: I0226 14:14:36.495155 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:36 crc kubenswrapper[4809]: I0226 14:14:36.495222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:36 crc kubenswrapper[4809]: I0226 14:14:36.495242 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:37 crc kubenswrapper[4809]: I0226 14:14:37.206510 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.117156 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.117446 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.118585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.118647 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.118661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.119391 4809 scope.go:117] "RemoveContainer" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" Feb 26 14:14:38 crc kubenswrapper[4809]: E0226 14:14:38.119655 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:38 crc kubenswrapper[4809]: I0226 14:14:38.204264 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:39 crc kubenswrapper[4809]: I0226 14:14:39.204573 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:39 crc kubenswrapper[4809]: E0226 14:14:39.839703 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.091519 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.092919 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.092963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.092976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.093006 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:40 crc kubenswrapper[4809]: E0226 14:14:40.098779 4809 kubelet_node_status.go:99] "Unable to register node with API server" 
err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 14:14:40 crc kubenswrapper[4809]: I0226 14:14:40.203072 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:41 crc kubenswrapper[4809]: I0226 14:14:41.204243 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:42 crc kubenswrapper[4809]: I0226 14:14:42.205384 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:42 crc kubenswrapper[4809]: E0226 14:14:42.342349 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:43 crc kubenswrapper[4809]: I0226 14:14:43.203620 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:44 crc kubenswrapper[4809]: I0226 14:14:44.205575 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:45 crc kubenswrapper[4809]: I0226 14:14:45.203629 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:46 crc kubenswrapper[4809]: I0226 14:14:46.204621 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 14:14:46 crc kubenswrapper[4809]: I0226 14:14:46.720208 4809 csr.go:261] certificate signing request csr-zvttl is approved, waiting to be issued Feb 26 14:14:46 crc kubenswrapper[4809]: I0226 14:14:46.728717 4809 csr.go:257] certificate signing request csr-zvttl is issued Feb 26 14:14:46 crc kubenswrapper[4809]: I0226 14:14:46.775707 4809 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.048152 4809 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.099866 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.101157 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.101202 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.101215 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.101316 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.113780 4809 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.114182 4809 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.114215 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.118259 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.118318 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.118337 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.118364 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.118381 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:47Z","lastTransitionTime":"2026-02-26T14:14:47Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.133484 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.142579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.142612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 
14:14:47.142623 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.142644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.142655 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:47Z","lastTransitionTime":"2026-02-26T14:14:47Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.155837 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.164660 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.164693 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 
14:14:47.164707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.164730 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.164743 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:47Z","lastTransitionTime":"2026-02-26T14:14:47Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.173951 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.180159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.180196 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 
14:14:47.180210 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.180234 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.180246 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:47Z","lastTransitionTime":"2026-02-26T14:14:47Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.189753 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:47Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.189853 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.189871 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 
14:14:47.290198 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.391286 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.491833 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.592912 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.693886 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.730246 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-05 22:20:06.964859122 +0000 UTC Feb 26 14:14:47 crc kubenswrapper[4809]: I0226 14:14:47.730320 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7520h5m19.234543325s for next certificate rotation Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.794654 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.895312 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:47 crc kubenswrapper[4809]: E0226 14:14:47.995964 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.097077 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.197517 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.298570 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.399544 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.500333 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.600713 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.701348 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.802207 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:48 crc kubenswrapper[4809]: E0226 14:14:48.902706 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.003358 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.104396 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.205296 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.306073 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.407170 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.508078 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.608778 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.709848 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.810581 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:49 crc kubenswrapper[4809]: E0226 14:14:49.911237 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.011918 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.112853 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.213876 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.314589 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.414723 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.515380 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.615755 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.715921 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.816980 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:50 crc kubenswrapper[4809]: E0226 14:14:50.917866 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.018842 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.119537 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 
14:14:51.220567 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.320843 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.421419 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.522378 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.622760 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.723257 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.823388 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:51 crc kubenswrapper[4809]: E0226 14:14:51.924355 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.024697 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.125336 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.225678 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.326043 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.343302 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.426987 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.527342 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.628144 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.728868 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.829697 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:52 crc kubenswrapper[4809]: E0226 14:14:52.930294 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.031452 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.132226 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 
14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.232571 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.256352 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.257426 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.257474 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.257487 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.258254 4809 scope.go:117] "RemoveContainer" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.332669 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.433076 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.533974 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.537548 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.539389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a"} Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.539540 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.540587 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.540619 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:53 crc kubenswrapper[4809]: I0226 14:14:53.540630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.634605 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.734871 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.836054 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:53 crc kubenswrapper[4809]: E0226 14:14:53.936854 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.037843 4809 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.138830 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.239156 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.255709 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.256972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.257028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.257040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.339326 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.439858 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.540041 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.544383 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.545172 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.547071 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" exitCode=255 Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.547115 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a"} Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.547247 4809 scope.go:117] "RemoveContainer" containerID="117ecfddeca146d6266358a4713ef6815919068f37058aeca782d6d805688f30" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.547382 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.548463 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.548518 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:54 crc kubenswrapper[4809]: I0226 14:14:54.548535 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:54 crc kubenswrapper[4809]: 
I0226 14:14:54.549565 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.549868 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.640776 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.741506 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.842401 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:54 crc kubenswrapper[4809]: E0226 14:14:54.943128 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.044265 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.144838 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.245430 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.345721 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.445995 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.546853 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: I0226 14:14:55.552137 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.647774 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.748693 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.849869 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:55 crc kubenswrapper[4809]: E0226 14:14:55.951455 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.051974 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.152626 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.253820 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.354820 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.455219 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.555808 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.656779 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.757257 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.858280 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:56 crc kubenswrapper[4809]: E0226 14:14:56.959146 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.059520 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.160297 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.261468 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.361901 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.403517 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.409135 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.409381 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.409556 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.409692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.409827 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:57Z","lastTransitionTime":"2026-02-26T14:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.425494 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.433462 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.433531 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.433557 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.433588 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.433611 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:57Z","lastTransitionTime":"2026-02-26T14:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.449598 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.459916 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.459972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.459990 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.460054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.460075 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:57Z","lastTransitionTime":"2026-02-26T14:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.475651 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.486192 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.486236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.486258 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.486284 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:14:57 crc kubenswrapper[4809]: I0226 14:14:57.486302 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:14:57Z","lastTransitionTime":"2026-02-26T14:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.502335 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.502668 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.502719 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.603198 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.703978 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.805058 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:57 crc kubenswrapper[4809]: E0226 14:14:57.905205 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.005370 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.106222 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.117157 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.117423 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.118988 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.119056 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.119073 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:14:58 crc kubenswrapper[4809]: I0226 14:14:58.119654 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.119840 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.207298 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.307843 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.409219 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.510110 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.610374 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.710644 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.811538 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:58 crc kubenswrapper[4809]: E0226 14:14:58.911960 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.012867 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.112969 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.213458 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.313632 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.414392 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.515445 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.616406 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.717004 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.818116 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:14:59 crc kubenswrapper[4809]: E0226 14:14:59.919055 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.020105 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.120692 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.221898 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.323086 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.423470 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.523932 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.624562 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.725743 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 14:15:00.826811 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:00 crc kubenswrapper[4809]: E0226 
14:15:00.927996 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.028138 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.129143 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.229725 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.330406 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.430836 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.531398 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.632418 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.733548 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.834333 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:01 crc kubenswrapper[4809]: E0226 14:15:01.934671 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.035597 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.136081 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.236745 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.337843 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.344218 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.438609 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.538781 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.639282 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.739488 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.840672 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 
14:15:02 crc kubenswrapper[4809]: E0226 14:15:02.941388 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.042303 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.142806 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.243538 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.344039 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.445097 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.545827 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.646635 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.747112 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.848105 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:03 crc kubenswrapper[4809]: I0226 14:15:03.908869 4809 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 14:15:03 crc kubenswrapper[4809]: E0226 14:15:03.949006 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.049470 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.100538 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.100886 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.102895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.102946 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.102967 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:04 crc kubenswrapper[4809]: I0226 14:15:04.103881 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.104201 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.149797 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.249914 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.350373 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.451093 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.552116 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.652289 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.752916 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.854121 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:04 crc kubenswrapper[4809]: E0226 14:15:04.955090 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.055562 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.156089 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.257119 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.357476 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.458497 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.558929 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.659890 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.760677 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.861074 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:05 crc kubenswrapper[4809]: E0226 14:15:05.961293 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.061645 4809 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.162315 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.263374 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.364471 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.465046 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.566112 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.666996 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.767822 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.868825 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:06 crc kubenswrapper[4809]: E0226 14:15:06.969474 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.069938 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.170394 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.270790 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.371245 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.471701 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.572200 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.673354 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.773749 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.782958 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.789402 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.789608 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.789748 4809 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.789899 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.790058 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:07Z","lastTransitionTime":"2026-02-26T14:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.808319 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.815133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.815166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.815175 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.815189 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.815200 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:07Z","lastTransitionTime":"2026-02-26T14:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.831641 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.837873 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.837942 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.837961 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.837991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.838042 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:07Z","lastTransitionTime":"2026-02-26T14:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.856110 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.861296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.861362 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.861383 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.861410 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:07 crc kubenswrapper[4809]: I0226 14:15:07.861427 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:07Z","lastTransitionTime":"2026-02-26T14:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.879488 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.879668 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.879739 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:07 crc kubenswrapper[4809]: E0226 14:15:07.980637 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.081524 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.182367 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.282903 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.383642 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.483754 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.584288 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.684956 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.785585 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.886416 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:08 crc kubenswrapper[4809]: E0226 14:15:08.987505 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.087866 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.188536 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.288693 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.389233 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.490340 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.590855 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.691075 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.792107 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.892237 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:09 crc kubenswrapper[4809]: E0226 14:15:09.993193 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 
14:15:10.093695 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.194283 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.295251 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.395824 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.496508 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.597235 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.703650 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.804608 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:10 crc kubenswrapper[4809]: E0226 14:15:10.905669 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.005800 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.106753 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.207296 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.224887 4809 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.236097 4809 apiserver.go:52] "Watching apiserver" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.243046 4809 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.243326 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-hc768","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.243863 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.243941 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.244075 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.244157 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.244350 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.244454 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.244503 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.244885 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.245101 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.245192 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.246956 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247274 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247380 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247519 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247531 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247699 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.247698 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.248326 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.248476 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.250243 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.250338 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.250549 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.276507 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.295057 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.305958 4809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.307964 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 
14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.309566 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.309626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.309648 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.309678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.309702 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.320815 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.332355 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.342841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.342937 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.342968 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343059 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343506 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343902 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343933 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343964 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343990 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.343636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod 
"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344034 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344067 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344108 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344147 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344137 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344187 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344217 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344251 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344315 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344344 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344377 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344406 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344435 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344450 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: 
"config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344488 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344600 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344641 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344676 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344699 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344720 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344741 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344763 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344785 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344774 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344809 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344865 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344897 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344923 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.344988 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345032 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345061 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345082 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345100 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345103 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345116 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345118 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345222 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345251 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345281 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345312 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345313 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345340 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345478 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345593 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345888 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345848 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345800 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345973 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.346912 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.345344 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347205 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347232 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347276 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347294 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347313 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347351 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347383 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347402 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347420 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347438 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347456 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347478 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347502 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347517 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347531 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347781 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348394 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348480 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348937 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.349211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.348948 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.349698 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.349742 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.347530 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350140 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350146 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350183 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350489 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350485 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350616 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350657 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.350859 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351212 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351365 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351395 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351417 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351597 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351632 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.351804 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352318 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352527 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352538 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352588 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352641 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352650 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352678 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352716 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352915 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.352664 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353132 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353155 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353177 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353254 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353273 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353410 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353433 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353451 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353459 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353470 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353513 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353542 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353636 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353675 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353696 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353718 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353739 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353762 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.353456 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.353787 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:11.853759646 +0000 UTC m=+90.327080209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354162 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354186 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354204 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354246 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354274 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354421 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354450 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354493 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354586 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354614 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354633 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354676 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354762 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354764 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354847 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354100 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354934 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354976 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.354276 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355060 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355103 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355181 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355222 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355290 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355359 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355371 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355380 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355449 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355485 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355520 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355561 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355563 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355620 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355690 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355726 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355760 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355791 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355824 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355854 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355885 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355918 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355638 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: 
"trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355949 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355983 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356006 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356080 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356103 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356129 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355658 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356152 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355745 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355794 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356195 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356217 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356237 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356257 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356276 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356298 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356318 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356338 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356358 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356381 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356424 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356449 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356470 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356497 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356518 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356595 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356627 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356648 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356667 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356687 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356707 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356727 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356747 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356773 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356795 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356817 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356837 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356857 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356877 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356898 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356918 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356938 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356982 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357004 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357065 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357092 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357116 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357139 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357163 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357184 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357204 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357264 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357398 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357420 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357445 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357479 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357509 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357539 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357601 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357633 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357667 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357703 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 14:15:11 crc 
kubenswrapper[4809]: I0226 14:15:11.357740 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357771 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357802 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357834 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357888 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357919 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357953 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357985 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358038 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358074 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358104 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358134 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358202 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358246 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358312 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358556 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358591 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358632 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358668 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358712 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358756 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358792 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3c91705d-1fab-4240-8e70-b3e01e220a8c-hosts-file\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358829 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kg4d\" (UniqueName: \"kubernetes.io/projected/3c91705d-1fab-4240-8e70-b3e01e220a8c-kube-api-access-9kg4d\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc 
kubenswrapper[4809]: I0226 14:15:11.358939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359062 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359087 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359107 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359125 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359145 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359164 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359181 4809 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359199 4809 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359217 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359234 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359252 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359270 4809 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359287 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359304 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359321 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359338 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359354 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359370 4809 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359387 4809 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359404 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359421 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359439 4809 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359456 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359478 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359496 4809 reconciler_common.go:293] 
"Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359519 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359539 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359557 4809 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359574 4809 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359591 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359609 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359627 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359646 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359664 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359681 4809 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359699 4809 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359719 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359736 4809 reconciler_common.go:293] "Volume 
detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359752 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355934 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359772 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359790 4809 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359808 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359823 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359839 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359857 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359877 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359896 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359913 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359932 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359873 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.360124 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.360572 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.360616 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.360631 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.360849 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.355987 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356127 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356324 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356444 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356570 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356743 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356765 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.356952 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357158 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357495 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361120 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357656 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357924 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.357998 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358143 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358154 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358202 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358395 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358505 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358572 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.358887 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359115 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359180 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359389 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359488 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359491 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359497 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359756 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361123 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.359948 4809 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361423 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361443 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361459 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361473 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361489 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361498 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361504 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361564 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361591 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361608 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361627 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361640 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361654 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361665 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361677 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361689 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361700 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361787 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361802 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361815 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361827 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361839 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361878 4809 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361561 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361772 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.361765 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.362287 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.362439 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.362476 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.362592 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.362691 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:11.862662085 +0000 UTC m=+90.335982758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363256 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363278 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363361 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363512 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363804 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.363858 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364213 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364149 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364458 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364502 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364525 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.364702 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365115 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365154 4809 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365452 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.365573 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365600 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365611 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.365732 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:11.86569941 +0000 UTC m=+90.339020033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.366074 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.365939 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.366802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.367251 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.367380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.367514 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.368212 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.368401 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.368638 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.368631 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.368929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.369119 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.369408 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.369418 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.369471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.370043 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.370491 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.370861 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.371049 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.371403 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.371447 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.371722 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.376370 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.376401 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.376419 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.376503 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:11.876478121 +0000 UTC m=+90.349798834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.378418 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.378697 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.378755 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.378830 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.378927 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.378944 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.378959 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.379077 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:11.879057824 +0000 UTC m=+90.352378387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.379179 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.379393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.379407 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.379599 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.379937 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.380240 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.380556 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.380631 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.381208 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.381787 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.383404 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.383835 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.384211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.384911 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.387319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.387472 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.387788 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.388279 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.388633 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.388959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.389591 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.389827 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.391501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.391767 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.392199 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.392225 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.392327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.392866 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.392636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.396892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.398005 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.398147 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.398149 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.398267 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.398969 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.399587 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.402865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.405658 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.413081 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.413148 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.413182 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.413207 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.413223 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.416683 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.418894 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.420648 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464548 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3c91705d-1fab-4240-8e70-b3e01e220a8c-hosts-file\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464603 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kg4d\" (UniqueName: \"kubernetes.io/projected/3c91705d-1fab-4240-8e70-b3e01e220a8c-kube-api-access-9kg4d\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464651 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464731 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464868 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464891 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464907 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464923 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464940 4809 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464956 4809 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.464970 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: 
I0226 14:15:11.464986 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465001 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465041 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465058 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465073 4809 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465088 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465103 4809 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465119 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465134 4809 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465150 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465167 4809 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465183 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465202 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc 
kubenswrapper[4809]: I0226 14:15:11.465218 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465234 4809 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465249 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465265 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465280 4809 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465296 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465312 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465328 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465343 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465358 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465376 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465391 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465405 4809 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 
14:15:11.465419 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465434 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465452 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465467 4809 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465482 4809 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465497 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465512 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465526 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465542 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465559 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465576 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465591 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465605 4809 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465620 4809 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465638 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465655 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465670 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465684 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465700 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465716 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465731 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465747 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465762 4809 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465782 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465833 4809 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465849 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465863 4809 reconciler_common.go:293] "Volume detached for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465877 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465893 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.465908 4809 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466134 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466150 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466166 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466181 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466196 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466211 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466227 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466243 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466260 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466276 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466312 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466328 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466343 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466359 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466374 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466388 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466402 4809 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466417 4809 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466431 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466446 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466462 4809 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466476 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466491 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466505 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466521 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466536 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466553 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466569 4809 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466584 4809 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466602 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466617 4809 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466828 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466844 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466859 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466874 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466888 4809 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466903 4809 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466920 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466934 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466949 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466965 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466979 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.466994 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467007 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467074 4809 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467091 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467105 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467218 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-72xsh"] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467317 4809 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc 
kubenswrapper[4809]: I0226 14:15:11.467376 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467400 4809 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467416 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467434 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467451 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467466 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467482 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467500 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467517 4809 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467533 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467547 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467373 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3c91705d-1fab-4240-8e70-b3e01e220a8c-hosts-file\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") 
pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467608 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.467831 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-q47rn"] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.468772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.468872 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ccvqm"] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.469250 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.470307 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474050 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474158 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474256 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474297 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474326 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474327 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.474460 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.475447 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.475465 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.475609 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.475764 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.475765 4809 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.484405 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.491424 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kg4d\" (UniqueName: \"kubernetes.io/projected/3c91705d-1fab-4240-8e70-b3e01e220a8c-kube-api-access-9kg4d\") pod \"node-resolver-hc768\" (UID: \"3c91705d-1fab-4240-8e70-b3e01e220a8c\") " pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.493989 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.505147 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.515991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.516285 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.516391 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.516351 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.517626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.517869 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.529303 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.539192 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.551612 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.560857 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.568816 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee5dfae-6391-4988-900c-e8abcb031d30-mcd-auth-proxy-config\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-netns\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569103 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-multus-certs\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569150 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569412 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-system-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569435 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-hostroot\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-daemon-config\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ee5dfae-6391-4988-900c-e8abcb031d30-proxy-tls\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569500 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-multus\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569516 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-kubelet\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569554 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-etc-kubernetes\") 
pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cni-binary-copy\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569682 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-conf-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569703 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569736 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-kube-api-access-pjr6v\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569774 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hgg\" (UniqueName: \"kubernetes.io/projected/2ee5dfae-6391-4988-900c-e8abcb031d30-kube-api-access-q5hgg\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569794 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cnibin\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-k8s-cni-cncf-io\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569917 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-cnibin\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-socket-dir-parent\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.569999 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ee5dfae-6391-4988-900c-e8abcb031d30-rootfs\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570056 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-system-cni-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570121 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-os-release\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-binary-copy\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-bin\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570291 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqfnk\" (UniqueName: \"kubernetes.io/projected/021874d0-ff73-40e4-97aa-2f72d648e289-kube-api-access-rqfnk\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.570317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-os-release\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.571129 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.580558 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: W0226 14:15:11.581311 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8eb0af48b44c8982635e831a517c700bcc0a2acacd91f92a21e1f089efd84f60 WatchSource:0}: Error finding container 8eb0af48b44c8982635e831a517c700bcc0a2acacd91f92a21e1f089efd84f60: Status 404 returned error can't find the container with id 8eb0af48b44c8982635e831a517c700bcc0a2acacd91f92a21e1f089efd84f60 Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.583392 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 14:15:11 crc kubenswrapper[4809]: set -o allexport Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: source /etc/kubernetes/apiserver-url.env Feb 26 14:15:11 crc kubenswrapper[4809]: else Feb 26 14:15:11 crc kubenswrapper[4809]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 14:15:11 crc kubenswrapper[4809]: exit 1 Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 14:15:11 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INA
CTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.583740 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.584640 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.590031 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.598165 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ -f "/env/_master" ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: set -o allexport Feb 26 14:15:11 crc kubenswrapper[4809]: source "/env/_master" Feb 26 14:15:11 crc kubenswrapper[4809]: set +o allexport Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 26 14:15:11 crc kubenswrapper[4809]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 14:15:11 crc kubenswrapper[4809]: ho_enable="--enable-hybrid-overlay" Feb 26 14:15:11 crc kubenswrapper[4809]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 14:15:11 crc kubenswrapper[4809]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 14:15:11 crc kubenswrapper[4809]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 14:15:11 crc kubenswrapper[4809]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 14:15:11 crc kubenswrapper[4809]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 14:15:11 crc kubenswrapper[4809]: --webhook-host=127.0.0.1 \ Feb 26 14:15:11 crc kubenswrapper[4809]: --webhook-port=9743 \ Feb 26 14:15:11 crc kubenswrapper[4809]: ${ho_enable} \ Feb 26 14:15:11 crc kubenswrapper[4809]: --enable-interconnect \ Feb 26 14:15:11 crc kubenswrapper[4809]: --disable-approver \ Feb 26 14:15:11 crc kubenswrapper[4809]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 14:15:11 crc kubenswrapper[4809]: --wait-for-kubernetes-api=200s \ Feb 26 14:15:11 crc kubenswrapper[4809]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 14:15:11 crc kubenswrapper[4809]: --loglevel="${LOGLEVEL}" Feb 26 14:15:11 crc kubenswrapper[4809]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.598243 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.599407 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.599496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8eb0af48b44c8982635e831a517c700bcc0a2acacd91f92a21e1f089efd84f60"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.600697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c0582c16689da33f63a3b9385500e4d391746d9f99dbafddc6c8bb858df063eb"} Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.601169 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 
14:15:11 crc kubenswrapper[4809]: if [[ -f "/env/_master" ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: set -o allexport Feb 26 14:15:11 crc kubenswrapper[4809]: source "/env/_master" Feb 26 14:15:11 crc kubenswrapper[4809]: set +o allexport Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 14:15:11 crc kubenswrapper[4809]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 14:15:11 crc kubenswrapper[4809]: --disable-webhook \ Feb 26 14:15:11 crc kubenswrapper[4809]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 14:15:11 crc kubenswrapper[4809]: --loglevel="${LOGLEVEL}" Feb 26 14:15:11 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.602934 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.603181 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 14:15:11 crc kubenswrapper[4809]: set -o 
allexport Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: source /etc/kubernetes/apiserver-url.env Feb 26 14:15:11 crc kubenswrapper[4809]: else Feb 26 14:15:11 crc kubenswrapper[4809]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 14:15:11 crc kubenswrapper[4809]: exit 1 Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 14:15:11 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil
,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.603574 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hc768" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.604424 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.608498 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: W0226 14:15:11.610734 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8523b742775a2e24c4973c8d968ab27bf82ef5f01847023f8acfe9ab6a27f22e WatchSource:0}: Error finding container 8523b742775a2e24c4973c8d968ab27bf82ef5f01847023f8acfe9ab6a27f22e: Status 404 returned error can't find the container with id 8523b742775a2e24c4973c8d968ab27bf82ef5f01847023f8acfe9ab6a27f22e Feb 
26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.613976 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: W0226 14:15:11.614886 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c91705d_1fab_4240_8e70_b3e01e220a8c.slice/crio-aa22df68a5c598631b5040b0eb61ff28ab7cfc08e753efe281e590f8a2b87f01 WatchSource:0}: Error finding container aa22df68a5c598631b5040b0eb61ff28ab7cfc08e753efe281e590f8a2b87f01: Status 404 returned error can't find the container with id aa22df68a5c598631b5040b0eb61ff28ab7cfc08e753efe281e590f8a2b87f01 Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.615574 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.617049 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.617830 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 26 14:15:11 crc 
kubenswrapper[4809]: set -uo pipefail Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 26 14:15:11 crc kubenswrapper[4809]: HOSTS_FILE="/etc/hosts" Feb 26 14:15:11 crc kubenswrapper[4809]: TEMP_FILE="/etc/hosts.tmp" Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: # Make a temporary file with the old hosts file's attributes. Feb 26 14:15:11 crc kubenswrapper[4809]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 26 14:15:11 crc kubenswrapper[4809]: echo "Failed to preserve hosts file. Exiting." Feb 26 14:15:11 crc kubenswrapper[4809]: exit 1 Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: while true; do Feb 26 14:15:11 crc kubenswrapper[4809]: declare -A svc_ips Feb 26 14:15:11 crc kubenswrapper[4809]: for svc in "${services[@]}"; do Feb 26 14:15:11 crc kubenswrapper[4809]: # Fetch service IP from cluster dns if present. We make several tries Feb 26 14:15:11 crc kubenswrapper[4809]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 26 14:15:11 crc kubenswrapper[4809]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 26 14:15:11 crc kubenswrapper[4809]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 26 14:15:11 crc kubenswrapper[4809]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:11 crc kubenswrapper[4809]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:11 crc kubenswrapper[4809]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:11 crc kubenswrapper[4809]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 26 14:15:11 crc kubenswrapper[4809]: for i in ${!cmds[*]} Feb 26 14:15:11 crc kubenswrapper[4809]: do Feb 26 14:15:11 crc kubenswrapper[4809]: ips=($(eval "${cmds[i]}")) Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: svc_ips["${svc}"]="${ips[@]}" Feb 26 14:15:11 crc kubenswrapper[4809]: break Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: done Feb 26 14:15:11 crc kubenswrapper[4809]: done Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: # Update /etc/hosts only if we get valid service IPs Feb 26 14:15:11 crc kubenswrapper[4809]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 26 14:15:11 crc kubenswrapper[4809]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 26 14:15:11 crc kubenswrapper[4809]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 26 14:15:11 crc kubenswrapper[4809]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 26 14:15:11 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:11 crc kubenswrapper[4809]: continue Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: # Append resolver entries for services Feb 26 14:15:11 crc kubenswrapper[4809]: rc=0 Feb 26 14:15:11 crc kubenswrapper[4809]: for svc in "${!svc_ips[@]}"; do Feb 26 14:15:11 crc kubenswrapper[4809]: for ip in ${svc_ips[${svc}]}; do Feb 26 14:15:11 crc kubenswrapper[4809]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 26 14:15:11 crc kubenswrapper[4809]: done Feb 26 14:15:11 crc kubenswrapper[4809]: done Feb 26 14:15:11 crc kubenswrapper[4809]: if [[ $rc -ne 0 ]]; then Feb 26 14:15:11 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:11 crc kubenswrapper[4809]: continue Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: Feb 26 14:15:11 crc kubenswrapper[4809]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 26 14:15:11 crc kubenswrapper[4809]: # Replace /etc/hosts with our modified version if needed Feb 26 14:15:11 crc kubenswrapper[4809]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 26 14:15:11 crc kubenswrapper[4809]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 26 14:15:11 crc kubenswrapper[4809]: fi Feb 26 14:15:11 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:11 crc kubenswrapper[4809]: unset svc_ips Feb 26 14:15:11 crc kubenswrapper[4809]: done Feb 26 14:15:11 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kg4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hc768_openshift-dns(3c91705d-1fab-4240-8e70-b3e01e220a8c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc 
kubenswrapper[4809]: E0226 14:15:11.619142 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hc768" podUID="3c91705d-1fab-4240-8e70-b3e01e220a8c" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.620383 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.620424 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.620440 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.620461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.620474 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.629704 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.640386 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.651091 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.659856 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.667925 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-bin\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670782 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqfnk\" (UniqueName: \"kubernetes.io/projected/021874d0-ff73-40e4-97aa-2f72d648e289-kube-api-access-rqfnk\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670799 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-os-release\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670829 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee5dfae-6391-4988-900c-e8abcb031d30-mcd-auth-proxy-config\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670846 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-netns\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-multus-certs\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670876 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q47rn\" (UID: 
\"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670893 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670917 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-system-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670936 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-hostroot\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670952 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-daemon-config\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ee5dfae-6391-4988-900c-e8abcb031d30-proxy-tls\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.670982 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-multus\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671001 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-kubelet\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671044 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-etc-kubernetes\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671060 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cni-binary-copy\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671079 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-conf-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671095 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671124 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-kube-api-access-pjr6v\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671142 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hgg\" (UniqueName: \"kubernetes.io/projected/2ee5dfae-6391-4988-900c-e8abcb031d30-kube-api-access-q5hgg\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671190 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-netns\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cnibin\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671221 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-system-cni-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-os-release\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671216 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cnibin\") pod \"multus-ccvqm\" 
(UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671282 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-multus-certs\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671278 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-etc-kubernetes\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671332 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-k8s-cni-cncf-io\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-cnibin\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671411 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-cnibin\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671420 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-socket-dir-parent\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671453 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-kubelet\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671460 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-socket-dir-parent\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671379 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-bin\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671488 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ee5dfae-6391-4988-900c-e8abcb031d30-rootfs\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671533 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-conf-dir\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671534 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-system-cni-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671558 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-system-cni-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671507 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ee5dfae-6391-4988-900c-e8abcb031d30-rootfs\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671637 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-run-k8s-cni-cncf-io\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671773 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee5dfae-6391-4988-900c-e8abcb031d30-mcd-auth-proxy-config\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671857 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-os-release\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671932 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-os-release\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671936 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-hostroot\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.671966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-host-var-lib-cni-multus\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.672005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-binary-copy\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.672252 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.672281 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-multus-daemon-config\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.672416 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-cni-binary-copy\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.672709 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/021874d0-ff73-40e4-97aa-2f72d648e289-cni-binary-copy\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.673274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/021874d0-ff73-40e4-97aa-2f72d648e289-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.676484 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ee5dfae-6391-4988-900c-e8abcb031d30-proxy-tls\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.678882 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.689069 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.692273 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/9bca1e32-8331-4d7d-acf3-7ee31374c8bd-kube-api-access-pjr6v\") pod \"multus-ccvqm\" (UID: \"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\") " pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.697044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqfnk\" (UniqueName: \"kubernetes.io/projected/021874d0-ff73-40e4-97aa-2f72d648e289-kube-api-access-rqfnk\") pod \"multus-additional-cni-plugins-q47rn\" (UID: \"021874d0-ff73-40e4-97aa-2f72d648e289\") " pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.699170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.699246 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hgg\" (UniqueName: \"kubernetes.io/projected/2ee5dfae-6391-4988-900c-e8abcb031d30-kube-api-access-q5hgg\") pod \"machine-config-daemon-72xsh\" (UID: \"2ee5dfae-6391-4988-900c-e8abcb031d30\") " pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.711252 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.723175 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.723232 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.723241 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc 
kubenswrapper[4809]: I0226 14:15:11.723200 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.723273 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.723284 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.732938 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.741578 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.749597 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.758545 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.792933 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.801950 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q47rn" Feb 26 14:15:11 crc kubenswrapper[4809]: W0226 14:15:11.806365 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee5dfae_6391_4988_900c_e8abcb031d30.slice/crio-659e5d6aafb273bd0a4ac948b2ef4dd9362ad13be23cfa12b169dca958be7753 WatchSource:0}: Error finding container 659e5d6aafb273bd0a4ac948b2ef4dd9362ad13be23cfa12b169dca958be7753: Status 404 returned error can't find the container with id 659e5d6aafb273bd0a4ac948b2ef4dd9362ad13be23cfa12b169dca958be7753 Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.808323 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ccvqm" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.810956 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5hgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.817864 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} 
{} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5hgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.818236 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqfnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-q47rn_openshift-multus(021874d0-ff73-40e4-97aa-2f72d648e289): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.819484 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-q47rn" 
podUID="021874d0-ff73-40e4-97aa-2f72d648e289" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.819602 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.820512 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qwqmq"] Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.821527 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.823501 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 14:15:11 crc kubenswrapper[4809]: W0226 14:15:11.824533 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bca1e32_8331_4d7d_acf3_7ee31374c8bd.slice/crio-4ef05c33dc9e89a40b566cdd9a5598468b1a416b686fdedcd671eee6a1b74e71 WatchSource:0}: Error finding container 4ef05c33dc9e89a40b566cdd9a5598468b1a416b686fdedcd671eee6a1b74e71: Status 404 returned error can't find the container with id 4ef05c33dc9e89a40b566cdd9a5598468b1a416b686fdedcd671eee6a1b74e71 Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825276 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825345 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825418 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825440 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825466 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825477 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825483 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825508 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825865 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.825899 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.827627 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:11 crc kubenswrapper[4809]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 26 14:15:11 crc kubenswrapper[4809]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 26 14:15:11 crc kubenswrapper[4809]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-ccvqm_openshift-multus(9bca1e32-8331-4d7d-acf3-7ee31374c8bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:11 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.828930 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-ccvqm" podUID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.837903 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.847915 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.857127 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.864830 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.871968 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.874124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.874379 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:12.874340811 +0000 UTC m=+91.347661344 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.874704 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.874831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.874869 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:12.874755532 +0000 UTC m=+91.348076055 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.875265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.876752 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.876913 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:12.876899352 +0000 UTC m=+91.350219875 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.882055 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.892729 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.900792 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.909798 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.927551 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.927589 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.927601 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.927618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.927630 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:11Z","lastTransitionTime":"2026-02-26T14:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.949521 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.977908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.977965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swptd\" (UniqueName: \"kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978328 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978382 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978443 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978485 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978519 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978586 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978653 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978714 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978755 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978797 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978836 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978914 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978959 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: I0226 14:15:11.978988 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979199 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979233 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979253 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979315 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:12.979294897 +0000 UTC m=+91.452615460 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979404 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979481 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979538 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:11 crc kubenswrapper[4809]: E0226 14:15:11.979648 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:12.979629816 +0000 UTC m=+91.452950389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.004559 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.030782 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.030828 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.030840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.030855 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.030866 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.079890 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swptd\" (UniqueName: \"kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.079944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.079982 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080004 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080046 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080063 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080079 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080116 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080117 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080165 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080209 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080213 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080134 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080166 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080592 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080632 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080670 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080765 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080778 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080806 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080846 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket\") pod \"ovnkube-node-qwqmq\" (UID: 
\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080858 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080889 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080907 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080929 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080950 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.080993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.081080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.081138 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.081142 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc 
kubenswrapper[4809]: I0226 14:15:12.081468 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.081538 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.081583 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.086008 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.100315 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swptd\" (UniqueName: \"kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd\") pod \"ovnkube-node-qwqmq\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133390 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133540 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133631 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133660 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.133679 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.236513 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.236555 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.236567 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.236584 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.236596 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.256607 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.256728 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.260764 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.261576 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.262747 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.263477 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.264614 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.265130 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.265726 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.266654 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.267281 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.268379 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.268846 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.270033 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.270667 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.271311 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.272323 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.272877 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.274034 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.274510 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.275236 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.276387 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.276893 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.277955 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.278453 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.279677 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.280162 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.280861 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.282090 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.282651 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.283676 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.284154 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.285003 4809 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.285139 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.286853 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.287777 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.288221 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.289727 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.290375 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.291232 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.291844 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.292940 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.293452 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.294397 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.294964 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.295993 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.296500 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.297375 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.297892 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.299041 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.299542 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.300351 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.300790 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.301721 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.302331 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.302772 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.338680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.338716 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.338724 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.338738 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.338747 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.440964 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.441001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.441030 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.441058 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.441070 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.543769 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.543810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.543818 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.543834 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.543845 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.593172 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: W0226 14:15:12.598850 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eaaa554_c5bb_455b_ad10_96f71caf4e26.slice/crio-36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca WatchSource:0}: Error finding container 36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca: Status 404 returned error can't find the container with id 36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.600830 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 26 14:15:12 crc kubenswrapper[4809]: apiVersion: v1 Feb 26 14:15:12 crc kubenswrapper[4809]: clusters: Feb 26 14:15:12 crc kubenswrapper[4809]: - cluster: Feb 26 14:15:12 crc kubenswrapper[4809]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 26 14:15:12 crc kubenswrapper[4809]: server: https://api-int.crc.testing:6443 Feb 26 14:15:12 crc kubenswrapper[4809]: name: default-cluster Feb 26 14:15:12 crc kubenswrapper[4809]: contexts: Feb 26 14:15:12 crc kubenswrapper[4809]: - context: Feb 26 14:15:12 crc kubenswrapper[4809]: cluster: default-cluster Feb 26 14:15:12 crc kubenswrapper[4809]: namespace: default Feb 26 14:15:12 crc kubenswrapper[4809]: user: default-auth Feb 26 14:15:12 crc kubenswrapper[4809]: name: default-context Feb 26 14:15:12 crc kubenswrapper[4809]: current-context: default-context Feb 26 14:15:12 crc kubenswrapper[4809]: kind: Config Feb 26 14:15:12 crc kubenswrapper[4809]: preferences: {} Feb 26 14:15:12 crc kubenswrapper[4809]: users: Feb 26 14:15:12 crc kubenswrapper[4809]: - name: default-auth Feb 26 14:15:12 crc kubenswrapper[4809]: user: Feb 26 14:15:12 crc kubenswrapper[4809]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 14:15:12 crc kubenswrapper[4809]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 14:15:12 crc kubenswrapper[4809]: EOF Feb 26 14:15:12 crc kubenswrapper[4809]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swptd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.602120 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.604190 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hc768" event={"ID":"3c91705d-1fab-4240-8e70-b3e01e220a8c","Type":"ContainerStarted","Data":"aa22df68a5c598631b5040b0eb61ff28ab7cfc08e753efe281e590f8a2b87f01"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.605160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca"} Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.606590 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 26 14:15:12 crc kubenswrapper[4809]: apiVersion: v1 Feb 26 14:15:12 crc kubenswrapper[4809]: clusters: Feb 26 14:15:12 crc kubenswrapper[4809]: - cluster: Feb 26 14:15:12 crc kubenswrapper[4809]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 26 14:15:12 crc kubenswrapper[4809]: server: https://api-int.crc.testing:6443 Feb 26 14:15:12 crc kubenswrapper[4809]: name: default-cluster Feb 26 14:15:12 crc kubenswrapper[4809]: contexts: Feb 26 14:15:12 crc kubenswrapper[4809]: - context: Feb 26 14:15:12 crc kubenswrapper[4809]: cluster: default-cluster Feb 26 14:15:12 crc kubenswrapper[4809]: namespace: default Feb 26 14:15:12 crc kubenswrapper[4809]: user: default-auth Feb 26 14:15:12 crc kubenswrapper[4809]: name: default-context Feb 26 14:15:12 crc kubenswrapper[4809]: current-context: default-context Feb 26 14:15:12 crc kubenswrapper[4809]: kind: Config Feb 26 14:15:12 crc kubenswrapper[4809]: preferences: {} Feb 26 14:15:12 crc kubenswrapper[4809]: users: Feb 26 
14:15:12 crc kubenswrapper[4809]: - name: default-auth Feb 26 14:15:12 crc kubenswrapper[4809]: user: Feb 26 14:15:12 crc kubenswrapper[4809]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 14:15:12 crc kubenswrapper[4809]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 26 14:15:12 crc kubenswrapper[4809]: EOF Feb 26 14:15:12 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swptd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.606630 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8523b742775a2e24c4973c8d968ab27bf82ef5f01847023f8acfe9ab6a27f22e"} Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.606678 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 26 14:15:12 crc kubenswrapper[4809]: set -uo pipefail Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 26 14:15:12 crc kubenswrapper[4809]: HOSTS_FILE="/etc/hosts" Feb 26 14:15:12 crc kubenswrapper[4809]: TEMP_FILE="/etc/hosts.tmp" Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: # Make a temporary file with the old hosts file's attributes. Feb 26 14:15:12 crc kubenswrapper[4809]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 26 14:15:12 crc kubenswrapper[4809]: echo "Failed to preserve hosts file. Exiting." 
Feb 26 14:15:12 crc kubenswrapper[4809]: exit 1 Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: while true; do Feb 26 14:15:12 crc kubenswrapper[4809]: declare -A svc_ips Feb 26 14:15:12 crc kubenswrapper[4809]: for svc in "${services[@]}"; do Feb 26 14:15:12 crc kubenswrapper[4809]: # Fetch service IP from cluster dns if present. We make several tries Feb 26 14:15:12 crc kubenswrapper[4809]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 26 14:15:12 crc kubenswrapper[4809]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 26 14:15:12 crc kubenswrapper[4809]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 26 14:15:12 crc kubenswrapper[4809]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:12 crc kubenswrapper[4809]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:12 crc kubenswrapper[4809]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 26 14:15:12 crc kubenswrapper[4809]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 26 14:15:12 crc kubenswrapper[4809]: for i in ${!cmds[*]} Feb 26 14:15:12 crc kubenswrapper[4809]: do Feb 26 14:15:12 crc kubenswrapper[4809]: ips=($(eval "${cmds[i]}")) Feb 26 14:15:12 crc kubenswrapper[4809]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 26 14:15:12 crc kubenswrapper[4809]: svc_ips["${svc}"]="${ips[@]}" Feb 26 14:15:12 crc kubenswrapper[4809]: break Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: done Feb 26 14:15:12 crc kubenswrapper[4809]: done Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: # Update /etc/hosts only if we get valid service IPs Feb 26 14:15:12 crc kubenswrapper[4809]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 26 14:15:12 crc kubenswrapper[4809]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 26 14:15:12 crc kubenswrapper[4809]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 26 14:15:12 crc kubenswrapper[4809]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 26 14:15:12 crc kubenswrapper[4809]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 26 14:15:12 crc kubenswrapper[4809]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 26 14:15:12 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:12 crc kubenswrapper[4809]: continue Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: # Append resolver entries for services Feb 26 14:15:12 crc kubenswrapper[4809]: rc=0 Feb 26 14:15:12 crc kubenswrapper[4809]: for svc in "${!svc_ips[@]}"; do Feb 26 14:15:12 crc kubenswrapper[4809]: for ip in ${svc_ips[${svc}]}; do Feb 26 14:15:12 crc kubenswrapper[4809]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 26 14:15:12 crc kubenswrapper[4809]: done Feb 26 14:15:12 crc kubenswrapper[4809]: done Feb 26 14:15:12 crc kubenswrapper[4809]: if [[ $rc -ne 0 ]]; then Feb 26 14:15:12 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:12 crc kubenswrapper[4809]: continue Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 26 14:15:12 crc kubenswrapper[4809]: # Replace /etc/hosts with our modified version if needed Feb 26 14:15:12 crc kubenswrapper[4809]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 26 14:15:12 crc kubenswrapper[4809]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: sleep 60 & wait Feb 26 14:15:12 crc kubenswrapper[4809]: unset svc_ips Feb 26 14:15:12 crc kubenswrapper[4809]: done Feb 26 14:15:12 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kg4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hc768_openshift-dns(3c91705d-1fab-4240-8e70-b3e01e220a8c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.607625 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.607727 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hc768" podUID="3c91705d-1fab-4240-8e70-b3e01e220a8c" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.607724 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.608031 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerStarted","Data":"4ef05c33dc9e89a40b566cdd9a5598468b1a416b686fdedcd671eee6a1b74e71"} Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.608720 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.609097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerStarted","Data":"a5258e4a7671a414023b96f349eed6d16cef171e562280d81cd5818af70c8bc6"} Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.609251 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" 
Feb 26 14:15:12 crc kubenswrapper[4809]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 26 14:15:12 crc kubenswrapper[4809]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,Recursi
veReadOnly:nil,},VolumeMount{Name:kube-api-access-pjr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-ccvqm_openshift-multus(9bca1e32-8331-4d7d-acf3-7ee31374c8bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.609827 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.610103 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqfnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-q47rn_openshift-multus(021874d0-ff73-40e4-97aa-2f72d648e289): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.610383 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-ccvqm" podUID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.610734 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" 
event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"659e5d6aafb273bd0a4ac948b2ef4dd9362ad13be23cfa12b169dca958be7753"} Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.611165 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-q47rn" podUID="021874d0-ff73-40e4-97aa-2f72d648e289" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.612077 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5hgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.612239 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 14:15:12 crc kubenswrapper[4809]: if [[ -f "/env/_master" ]]; then Feb 26 14:15:12 crc kubenswrapper[4809]: set -o allexport Feb 26 14:15:12 crc kubenswrapper[4809]: source "/env/_master" Feb 26 14:15:12 crc kubenswrapper[4809]: set +o allexport Feb 26 14:15:12 crc 
kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 26 14:15:12 crc kubenswrapper[4809]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 14:15:12 crc kubenswrapper[4809]: ho_enable="--enable-hybrid-overlay" Feb 26 14:15:12 crc kubenswrapper[4809]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 14:15:12 crc kubenswrapper[4809]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 14:15:12 crc kubenswrapper[4809]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 14:15:12 crc kubenswrapper[4809]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 14:15:12 crc kubenswrapper[4809]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 14:15:12 crc kubenswrapper[4809]: --webhook-host=127.0.0.1 \ Feb 26 14:15:12 crc kubenswrapper[4809]: --webhook-port=9743 \ Feb 26 14:15:12 crc kubenswrapper[4809]: ${ho_enable} \ Feb 26 14:15:12 crc kubenswrapper[4809]: --enable-interconnect \ Feb 26 14:15:12 crc kubenswrapper[4809]: --disable-approver \ Feb 26 14:15:12 crc kubenswrapper[4809]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 14:15:12 crc kubenswrapper[4809]: --wait-for-kubernetes-api=200s \ Feb 26 14:15:12 crc kubenswrapper[4809]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 14:15:12 crc kubenswrapper[4809]: --loglevel="${LOGLEVEL}" Feb 26 14:15:12 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.614328 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5hgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.614838 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:12 crc kubenswrapper[4809]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 14:15:12 crc kubenswrapper[4809]: if [[ -f "/env/_master" ]]; then Feb 26 14:15:12 crc kubenswrapper[4809]: set -o allexport Feb 26 14:15:12 crc kubenswrapper[4809]: source "/env/_master" Feb 26 14:15:12 crc kubenswrapper[4809]: set +o allexport Feb 26 14:15:12 crc kubenswrapper[4809]: fi Feb 26 14:15:12 crc kubenswrapper[4809]: Feb 26 14:15:12 crc kubenswrapper[4809]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 14:15:12 crc kubenswrapper[4809]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 14:15:12 crc 
kubenswrapper[4809]: --disable-webhook \ Feb 26 14:15:12 crc kubenswrapper[4809]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 14:15:12 crc kubenswrapper[4809]: --loglevel="${LOGLEVEL}" Feb 26 14:15:12 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:12 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.615747 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.617153 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.627325 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.641199 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.647495 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.647569 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.647589 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.647606 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.647615 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.655995 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.663453 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc 
kubenswrapper[4809]: I0226 14:15:12.670821 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.671844 4809 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 14:15:12 crc 
kubenswrapper[4809]: I0226 14:15:12.680917 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.690425 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.701261 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.711840 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.720847 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.735579 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.746420 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.749639 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.749674 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.749687 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.749710 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.749721 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.755796 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.765274 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.775298 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.785228 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.792611 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.828774 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.851572 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.851613 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.851625 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.851641 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.851651 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.873716 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.890603 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.890771 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.890844 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:14.890809299 +0000 UTC m=+93.364129822 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.890896 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.890916 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.890973 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-26 14:15:14.890950013 +0000 UTC m=+93.364270566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.891173 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.891307 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:14.891277223 +0000 UTC m=+93.364597776 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.909599 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.954597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.954654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.954667 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.954954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.954991 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:12Z","lastTransitionTime":"2026-02-26T14:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.992323 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:12 crc kubenswrapper[4809]: I0226 14:15:12.992369 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992535 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992558 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992575 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992604 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992677 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992704 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992629 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:14.992612517 +0000 UTC m=+93.465933060 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:12 crc kubenswrapper[4809]: E0226 14:15:12.992830 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:14.992798262 +0000 UTC m=+93.466118825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.057543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.057619 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.057641 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.057666 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.057684 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.160382 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.160436 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.160452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.160476 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.160495 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.227347 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pkjv8"] Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.227748 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.229933 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.230896 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.232556 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.232699 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.242599 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.253134 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.255656 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.255720 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.255847 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.256110 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.262713 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.262786 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.262799 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.262817 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.262831 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.272504 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.285393 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.298746 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.315262 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.324218 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.336207 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.348231 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.365479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.365529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.365541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.365561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.365575 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.392857 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.396498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lbm\" (UniqueName: \"kubernetes.io/projected/628aecc0-f33d-45bc-a351-897a05a70dff-kube-api-access-m6lbm\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.396565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/628aecc0-f33d-45bc-a351-897a05a70dff-serviceca\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.396614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/628aecc0-f33d-45bc-a351-897a05a70dff-host\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.434749 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.467987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.468089 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.468114 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.468146 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.468186 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.472167 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.498167 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/628aecc0-f33d-45bc-a351-897a05a70dff-serviceca\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.498240 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/628aecc0-f33d-45bc-a351-897a05a70dff-host\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.498316 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lbm\" (UniqueName: \"kubernetes.io/projected/628aecc0-f33d-45bc-a351-897a05a70dff-kube-api-access-m6lbm\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.498369 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/628aecc0-f33d-45bc-a351-897a05a70dff-host\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.499918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/628aecc0-f33d-45bc-a351-897a05a70dff-serviceca\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.528631 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lbm\" (UniqueName: \"kubernetes.io/projected/628aecc0-f33d-45bc-a351-897a05a70dff-kube-api-access-m6lbm\") pod \"node-ca-pkjv8\" (UID: \"628aecc0-f33d-45bc-a351-897a05a70dff\") " pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.540410 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pkjv8" Feb 26 14:15:13 crc kubenswrapper[4809]: W0226 14:15:13.552394 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod628aecc0_f33d_45bc_a351_897a05a70dff.slice/crio-ff9c7c5e750e972aa2ef49f3e6e184bda2fa4804d34058ca57a4ee928633baea WatchSource:0}: Error finding container ff9c7c5e750e972aa2ef49f3e6e184bda2fa4804d34058ca57a4ee928633baea: Status 404 returned error can't find the container with id ff9c7c5e750e972aa2ef49f3e6e184bda2fa4804d34058ca57a4ee928633baea Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.555059 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:13 crc kubenswrapper[4809]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 26 14:15:13 crc kubenswrapper[4809]: while [ true ]; Feb 26 14:15:13 crc kubenswrapper[4809]: do Feb 26 14:15:13 crc kubenswrapper[4809]: for f in $(ls /tmp/serviceca); do Feb 26 14:15:13 crc kubenswrapper[4809]: echo $f Feb 26 14:15:13 crc kubenswrapper[4809]: ca_file_path="/tmp/serviceca/${f}" Feb 26 14:15:13 crc kubenswrapper[4809]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 26 14:15:13 crc kubenswrapper[4809]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 26 14:15:13 crc kubenswrapper[4809]: if [ -e "${reg_dir_path}" ]; then Feb 26 14:15:13 crc kubenswrapper[4809]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 26 14:15:13 crc kubenswrapper[4809]: else Feb 26 14:15:13 crc kubenswrapper[4809]: mkdir $reg_dir_path Feb 26 14:15:13 crc 
kubenswrapper[4809]: cp $ca_file_path $reg_dir_path/ca.crt Feb 26 14:15:13 crc kubenswrapper[4809]: fi Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: for d in $(ls /etc/docker/certs.d); do Feb 26 14:15:13 crc kubenswrapper[4809]: echo $d Feb 26 14:15:13 crc kubenswrapper[4809]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 26 14:15:13 crc kubenswrapper[4809]: reg_conf_path="/tmp/serviceca/${dp}" Feb 26 14:15:13 crc kubenswrapper[4809]: if [ ! -e "${reg_conf_path}" ]; then Feb 26 14:15:13 crc kubenswrapper[4809]: rm -rf /etc/docker/certs.d/$d Feb 26 14:15:13 crc kubenswrapper[4809]: fi Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: sleep 60 & wait ${!} Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6lbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-pkjv8_openshift-image-registry(628aecc0-f33d-45bc-a351-897a05a70dff): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:13 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.557072 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-pkjv8" podUID="628aecc0-f33d-45bc-a351-897a05a70dff" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.572191 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.572231 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.572245 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.572269 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.572279 4809 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.614805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pkjv8" event={"ID":"628aecc0-f33d-45bc-a351-897a05a70dff","Type":"ContainerStarted","Data":"ff9c7c5e750e972aa2ef49f3e6e184bda2fa4804d34058ca57a4ee928633baea"} Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.616203 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:15:13 crc kubenswrapper[4809]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 26 14:15:13 crc kubenswrapper[4809]: while [ true ]; Feb 26 14:15:13 crc kubenswrapper[4809]: do Feb 26 14:15:13 crc kubenswrapper[4809]: for f in $(ls /tmp/serviceca); do Feb 26 14:15:13 crc kubenswrapper[4809]: echo $f Feb 26 14:15:13 crc kubenswrapper[4809]: ca_file_path="/tmp/serviceca/${f}" Feb 26 14:15:13 crc kubenswrapper[4809]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 26 14:15:13 crc kubenswrapper[4809]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 26 14:15:13 crc kubenswrapper[4809]: if [ -e "${reg_dir_path}" ]; then Feb 26 14:15:13 crc kubenswrapper[4809]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 26 14:15:13 crc kubenswrapper[4809]: else Feb 26 14:15:13 crc kubenswrapper[4809]: mkdir $reg_dir_path Feb 26 14:15:13 crc kubenswrapper[4809]: cp $ca_file_path $reg_dir_path/ca.crt Feb 26 14:15:13 crc kubenswrapper[4809]: fi Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: for d in $(ls /etc/docker/certs.d); do Feb 26 14:15:13 crc kubenswrapper[4809]: echo $d Feb 26 14:15:13 crc kubenswrapper[4809]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 26 14:15:13 crc kubenswrapper[4809]: reg_conf_path="/tmp/serviceca/${dp}" Feb 26 14:15:13 crc kubenswrapper[4809]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 26 14:15:13 crc kubenswrapper[4809]: rm -rf /etc/docker/certs.d/$d Feb 26 14:15:13 crc kubenswrapper[4809]: fi Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: sleep 60 & wait ${!} Feb 26 14:15:13 crc kubenswrapper[4809]: done Feb 26 14:15:13 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6lbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-pkjv8_openshift-image-registry(628aecc0-f33d-45bc-a351-897a05a70dff): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 14:15:13 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:15:13 crc kubenswrapper[4809]: E0226 14:15:13.617435 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-pkjv8" podUID="628aecc0-f33d-45bc-a351-897a05a70dff" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.626204 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.636207 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.647354 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.659682 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.675565 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.675617 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.675633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.675654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.675670 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.691061 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.735069 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.778968 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.779031 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.779050 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.779071 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.779089 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.786138 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.816030 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.850751 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.881582 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.881630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.881640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.881661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.881671 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.898330 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.932131 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.973756 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.984593 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.984662 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.984673 4809 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.984710 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:13 crc kubenswrapper[4809]: I0226 14:15:13.984723 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:13Z","lastTransitionTime":"2026-02-26T14:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.086837 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.086874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.086883 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.086899 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.086910 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.189946 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.190080 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.190095 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.190113 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.190130 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.256429 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.256756 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.293137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.293185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.293197 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.293217 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.293230 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.396074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.396107 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.396115 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.396129 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.396139 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.499744 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.499830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.499848 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.499874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.499897 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.602773 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.602825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.602835 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.602850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.602863 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.706421 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.706471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.706479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.706496 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.706505 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.809279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.809335 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.809344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.809360 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.809371 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.912257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.912330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.912349 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.912378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.912396 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:14Z","lastTransitionTime":"2026-02-26T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.913680 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.913840 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:14 crc kubenswrapper[4809]: I0226 14:15:14.913870 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.913983 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.914072 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:18.914058094 +0000 UTC m=+97.387378617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.914122 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:18.914116526 +0000 UTC m=+97.387437039 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.914153 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:14 crc kubenswrapper[4809]: E0226 14:15:14.914171 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:18.914165797 +0000 UTC m=+97.387486320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014577 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014646 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014599 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014730 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.014744 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.014784 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.014817 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.014829 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.014889 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:19.014869455 +0000 UTC m=+97.488189978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.014992 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.015073 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.015100 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.015203 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:19.015174033 +0000 UTC m=+97.488494596 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.119235 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.119310 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.119323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.119339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.119687 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.222403 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.222459 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.222469 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.222489 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.222501 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.256052 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.256106 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.256249 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:15 crc kubenswrapper[4809]: E0226 14:15:15.256393 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.325582 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.325659 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.325678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.325708 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.325730 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.428832 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.428874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.428883 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.428900 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.428911 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.531841 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.531921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.531934 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.531954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.531968 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.634282 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.634337 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.634350 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.634371 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.634387 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.737833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.737884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.737897 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.737917 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.737932 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.840596 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.840668 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.840692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.840720 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.840742 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.943445 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.943504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.943516 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.943536 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:15 crc kubenswrapper[4809]: I0226 14:15:15.943550 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:15Z","lastTransitionTime":"2026-02-26T14:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.047219 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.047263 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.047273 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.047289 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.047299 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.150469 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.150529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.150544 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.150567 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.150585 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.252832 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.252886 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.252896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.252912 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.252921 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.256319 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:16 crc kubenswrapper[4809]: E0226 14:15:16.256491 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.354839 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.354902 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.354912 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.354929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.354939 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.457909 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.457956 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.457969 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.457987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.458001 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.560634 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.560678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.560689 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.560706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.560718 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.663238 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.663283 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.663293 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.663313 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.663323 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.766297 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.766332 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.766343 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.766357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.766369 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.868760 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.868804 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.868813 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.868830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.868846 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.971663 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.971695 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.971703 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.971720 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:16 crc kubenswrapper[4809]: I0226 14:15:16.971731 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:16Z","lastTransitionTime":"2026-02-26T14:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.075098 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.075183 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.075196 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.075223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.075235 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.178050 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.178142 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.178169 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.178205 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.178229 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.256221 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.256507 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:17 crc kubenswrapper[4809]: E0226 14:15:17.256977 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:17 crc kubenswrapper[4809]: E0226 14:15:17.257031 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.281317 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.281402 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.281428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.281461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.281485 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.327154 4809 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.385333 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.385398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.385411 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.385432 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.385446 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.409981 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.411233 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:15:17 crc kubenswrapper[4809]: E0226 14:15:17.411441 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.488425 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.488472 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.488487 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.488509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.488525 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.510789 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.591734 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.591778 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.591790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.591808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.591819 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.626200 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:15:17 crc kubenswrapper[4809]: E0226 14:15:17.626435 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.695420 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.695455 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.695466 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.695483 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.695495 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.798907 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.798958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.798971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.798989 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.799003 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.901857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.901927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.901948 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.901974 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:17 crc kubenswrapper[4809]: I0226 14:15:17.901995 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:17Z","lastTransitionTime":"2026-02-26T14:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.006757 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.006808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.006821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.006841 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.006858 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.109514 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.109612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.109635 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.109665 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.109684 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.212426 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.212471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.212486 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.212505 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.212521 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.251244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.251310 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.251323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.251344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.251359 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.256161 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.256347 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.262977 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.266846 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.266885 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.266898 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.266916 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.266926 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.278502 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.282968 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.283029 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.283045 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.283062 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.283074 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.295106 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.300078 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.300136 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.300157 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.300184 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.300206 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.313496 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.318341 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.318399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.318413 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.318434 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.318446 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.330698 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.330903 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.332685 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.332732 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.332747 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.332767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.332782 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.436170 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.436223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.436240 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.436257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.436269 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.539260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.539308 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.539321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.539340 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.539355 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.640995 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.641116 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.641143 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.641180 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.641204 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.743957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.744265 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.744359 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.744450 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.744541 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.847322 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.847430 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.847502 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.847536 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.847557 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.950715 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.950779 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.950790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.950811 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.950823 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:18Z","lastTransitionTime":"2026-02-26T14:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.968397 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.968540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:18 crc kubenswrapper[4809]: I0226 14:15:18.968578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.968674 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:26.968637122 +0000 UTC m=+105.441957645 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.968712 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.968775 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:26.968758446 +0000 UTC m=+105.442078969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.968778 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:18 crc kubenswrapper[4809]: E0226 14:15:18.968825 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:26.968818427 +0000 UTC m=+105.442138950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.053551 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.053609 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.053622 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.053642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.053656 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.069281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.069330 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069475 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069527 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069544 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069611 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:27.069588507 +0000 UTC m=+105.542909110 (durationBeforeRetry 8s). 
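[Editor's note] The kube-api-access-* volumes failing to prepare here are projected volumes: each one combines a bound service-account token with the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and until the kubelet's object caches have those ConfigMaps registered the projection cannot be materialized, which is what the "not registered" errors say. A hedged client-go sketch that prints the sources behind a pod's projected volumes, using the pod named in these entries (the kubeconfig path is an assumption):

    // show_projected_sources.go - hedged sketch: list the sources behind a pod's projected volumes.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := client.CoreV1().Pods("openshift-network-diagnostics").
            Get(context.TODO(), "network-check-source-55646444c4-trplf", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, v := range pod.Spec.Volumes {
            if v.Projected == nil {
                continue
            }
            fmt.Println("projected volume:", v.Name)
            for _, s := range v.Projected.Sources {
                switch {
                case s.ServiceAccountToken != nil:
                    fmt.Println("  - bound service account token")
                case s.ConfigMap != nil:
                    fmt.Println("  - configMap:", s.ConfigMap.Name)
                case s.Secret != nil:
                    fmt.Println("  - secret:", s.Secret.Name)
                case s.DownwardAPI != nil:
                    fmt.Println("  - downward API")
                }
            }
        }
    }

For these pods the ConfigMap sources resolve to exactly the two objects named in the errors, so the mounts will stay in the 8-second retry loop until those ConfigMaps are visible to the kubelet again.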
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069486 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069664 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069681 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.069741 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:27.06972296 +0000 UTC m=+105.543043573 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.156539 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.156606 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.156619 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.156640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.156654 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.255979 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.256170 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.256234 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:19 crc kubenswrapper[4809]: E0226 14:15:19.256444 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.259251 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.259319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.259339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.259379 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.259397 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.363137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.363449 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.363542 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.363633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.363716 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.465618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.465673 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.465682 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.465698 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.465710 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.568158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.568199 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.568208 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.568225 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.568235 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.671137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.671190 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.671216 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.671236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.671251 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.774414 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.774473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.774483 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.774504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.774514 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.877522 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.877570 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.877582 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.877600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.877613 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.979970 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.980030 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.980043 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.980062 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:19 crc kubenswrapper[4809]: I0226 14:15:19.980074 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:19Z","lastTransitionTime":"2026-02-26T14:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.084157 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.084239 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.084259 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.084743 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.084908 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.189298 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.189357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.189374 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.189396 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.189412 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.255783 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:20 crc kubenswrapper[4809]: E0226 14:15:20.255970 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.292098 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.292137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.292147 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.292164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.292174 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.394773 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.394800 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.394809 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.394824 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.394833 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.497110 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.497170 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.497182 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.497200 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.497213 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.599755 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.599790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.599802 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.599819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.599832 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.702736 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.702774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.702785 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.702800 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.702811 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.805310 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.805349 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.805361 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.805379 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.805424 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.907724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.907757 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.907768 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.907784 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:20 crc kubenswrapper[4809]: I0226 14:15:20.907793 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:20Z","lastTransitionTime":"2026-02-26T14:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.009600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.009636 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.009647 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.009661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.009671 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.111610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.111657 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.111669 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.111686 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.111698 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.214244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.214303 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.214314 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.214332 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.214342 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.256678 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.256689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:21 crc kubenswrapper[4809]: E0226 14:15:21.256919 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
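[Editor's note] Every one of the repeated KubeletNotReady conditions above carries the same root cause: the runtime reports NetworkReady=false because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/ (on this cluster it is written once ovnkube-node comes up). A minimal stdlib sketch, under the assumption that a readable .conf/.conflist/.json file in that directory is what clears the condition, that checks whether such a file is present:

    // check_cni_conf.go - hedged sketch: does the CNI conf dir contain a network config yet?
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read", confDir, ":", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions the CNI config loader accepts
                fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", confDir, "- node will stay NotReady")
        }
    }

As soon as a config file appears there, the runtime flips NetworkReady to true and the Ready condition above stops being stamped with KubeletNotReady.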
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:21 crc kubenswrapper[4809]: E0226 14:15:21.256983 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.316880 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.316955 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.316973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.316999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.317044 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.419815 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.419879 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.419896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.419927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.419949 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.523115 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.523179 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.523256 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.523348 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.523379 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.626871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.626956 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.626997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.627069 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.627095 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.730192 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.730256 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.730276 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.730305 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.730323 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.833719 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.833776 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.833792 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.833812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.833825 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.936637 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.936692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.936714 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.936736 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:21 crc kubenswrapper[4809]: I0226 14:15:21.936747 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:21Z","lastTransitionTime":"2026-02-26T14:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.040064 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.040135 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.040161 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.040192 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.040216 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.142431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.142480 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.142494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.142513 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.142526 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.244827 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.244885 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.244895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.244913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.244927 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.255908 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:22 crc kubenswrapper[4809]: E0226 14:15:22.256062 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
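[Editor's note] The "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs show the kubelet declining to create a new sandbox for non-host-network pods while the runtime network is not ready; host-network pods (such as ovnkube-node itself) are exempt, which is what lets the network plugin bootstrap and eventually write the missing CNI config. A rough illustration of that gate, not the kubelet's actual code and with made-up names:

    package main

    import (
        "errors"
        "fmt"
    )

    // networkNotReadyErr mimics the runtime status error quoted in the log.
    var networkNotReadyErr = errors.New(
        "network is not ready: container runtime network not ready: NetworkReady=false")

    // canStartSandbox sketches the readiness gate: only host-network pods may get a new
    // sandbox while the CNI plugin has not produced a configuration yet.
    func canStartSandbox(hostNetwork, networkReady bool) error {
        if networkReady || hostNetwork {
            return nil
        }
        return networkNotReadyErr
    }

    func main() {
        fmt.Println("ovnkube-node (hostNetwork=true): ", canStartSandbox(true, false))
        fmt.Println("network-check-target (hostNetwork=false):", canStartSandbox(false, false))
    }

That is why network-check-source, network-check-target and the networking-console-plugin keep being skipped here while the OVN pods are allowed to proceed.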
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.268294 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.288135 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.302898 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.313892 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.325421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.341365 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a75
3a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4
d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.347158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.347215 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.347228 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.347249 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.347264 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.353471 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.365144 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.376144 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.390042 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.400482 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.412121 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.423411 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.433921 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.449833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.449895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.449910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.449929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.449943 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.551901 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.551941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.551951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.551967 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.551977 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.654910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.654962 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.654977 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.654997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.655032 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.757565 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.757596 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.757606 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.757624 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.757638 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.860503 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.860556 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.860568 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.860585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.860596 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.963736 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.963808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.963826 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.963850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:22 crc kubenswrapper[4809]: I0226 14:15:22.963862 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:22Z","lastTransitionTime":"2026-02-26T14:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.069104 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.069168 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.069181 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.069205 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.069218 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.171954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.172028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.172041 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.172055 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.172066 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.256272 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.256340 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:23 crc kubenswrapper[4809]: E0226 14:15:23.256441 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
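Up to this point the kubelet is in a steady loop: on every status tick it re-records the four node conditions, republishes Ready=False with reason KubeletNotReady, and the pod workers skip syncing because no CNI configuration has been written to /etc/kubernetes/cni/net.d/ yet. A minimal client-go sketch for observing the same condition from outside the node; the node name "crc" and the conf directory path are taken from the log, while the KUBECONFIG lookup and the directory listing are illustrative assumptions, not part of any component shown here:

```go
// Sketch: read the Ready condition that setters.go:603 keeps republishing,
// then list the CNI conf dir the kubelet is waiting on. Node name "crc"
// comes from the log; KUBECONFIG and the conf-dir check are illustrative.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Prints Ready=False reason=KubeletNotReady until a network
			// plugin drops a config into the directory below.
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}

	// Run on the node itself: an empty directory here is exactly the
	// "no CNI configuration file" state the kubelet keeps reporting.
	entries, err := os.ReadDir("/etc/kubernetes/cni/net.d")
	fmt.Printf("cni confs in /etc/kubernetes/cni/net.d: %d (err=%v)\n", len(entries), err)
}
```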
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:23 crc kubenswrapper[4809]: E0226 14:15:23.256528 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.275136 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.275176 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.275185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.275203 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.275213 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.377694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.377734 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.377742 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.377758 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.377768 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.480931 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.480997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.481034 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.481054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.481068 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.583857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.583920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.583929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.583947 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.583959 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.687584 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.687650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.687668 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.687693 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.687711 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.789798 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.789901 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.789916 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.789933 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.789943 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.893271 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.893339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.893358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.893392 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.893413 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.996242 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.996292 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.996304 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.996322 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:23 crc kubenswrapper[4809]: I0226 14:15:23.996335 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:23Z","lastTransitionTime":"2026-02-26T14:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.041543 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb"] Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.042611 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.046470 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.046648 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.061189 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
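The network-operator status patch above records lastState.terminated with exitCode 137 and reason ContainerStatusUnknown: the old container vanished across the kubelet restart, so the runtime substitutes the synthetic signal-kill code. Exit codes above 128 follow the 128+signal convention (standard POSIX shell and container-runtime behavior, not anything specific to this dump), which a few lines of Go can decode:

```go
// Sketch: decode container exit codes like the 137 recorded above.
// Codes above 128 follow the 128+signal convention, so 137 = SIGKILL(9).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	for _, code := range []int{0, 1, 137} {
		if code > 128 {
			sig := syscall.Signal(code - 128)
			fmt.Printf("exit %d: terminated by signal %d (%s)\n", code, sig, sig)
			continue
		}
		fmt.Printf("exit %d: normal exit status\n", code)
	}
}
```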
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.073634 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.085461 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.095248 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.099889 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.099949 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.099968 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.099993 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.100038 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.115709 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.125437 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ftq\" (UniqueName: \"kubernetes.io/projected/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-kube-api-access-b9ftq\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.125508 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.125543 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.125564 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.128236 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.143354 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.157158 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.172114 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.186952 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.195469 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.203051 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.203115 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.203136 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.203164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.203183 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.209927 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc 
kubenswrapper[4809]: I0226 14:15:24.224247 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.226764 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ftq\" (UniqueName: \"kubernetes.io/projected/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-kube-api-access-b9ftq\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.226926 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.226986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.227083 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.228272 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.228573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.231717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.236258 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.246269 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ftq\" (UniqueName: \"kubernetes.io/projected/f6ef5e93-b8e6-4ec8-b07f-841b17f321af-kube-api-access-b9ftq\") pod \"ovnkube-control-plane-749d76644c-vrglb\" (UID: \"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.256183 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:24 crc kubenswrapper[4809]: E0226 14:15:24.256827 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.263361 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\
\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.305852 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.305892 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.305920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.305936 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.305946 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.355972 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" Feb 26 14:15:24 crc kubenswrapper[4809]: W0226 14:15:24.369577 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ef5e93_b8e6_4ec8_b07f_841b17f321af.slice/crio-0bfb5ee63cc11c7ebfc997648ac10e46b622f5d6e324303b9d84f4cfe78496c0 WatchSource:0}: Error finding container 0bfb5ee63cc11c7ebfc997648ac10e46b622f5d6e324303b9d84f4cfe78496c0: Status 404 returned error can't find the container with id 0bfb5ee63cc11c7ebfc997648ac10e46b622f5d6e324303b9d84f4cfe78496c0 Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.409263 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.409309 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.409318 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.409358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.409368 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.512106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.512142 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.512151 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.512165 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.512176 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.614293 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.614334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.614348 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.614367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.614378 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.642574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" event={"ID":"f6ef5e93-b8e6-4ec8-b07f-841b17f321af","Type":"ContainerStarted","Data":"0bfb5ee63cc11c7ebfc997648ac10e46b622f5d6e324303b9d84f4cfe78496c0"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.717452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.717521 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.717538 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.717973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.718164 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.820588 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.820645 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.820676 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.820701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.820721 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.923681 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.923795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.923820 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.923858 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:24 crc kubenswrapper[4809]: I0226 14:15:24.923892 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:24Z","lastTransitionTime":"2026-02-26T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.027055 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.027107 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.027121 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.027138 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.027149 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.124158 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-55482"] Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.125144 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.125215 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.130310 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.130348 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.130358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.130374 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.130385 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.137172 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.147316 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.165381 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.184667 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.198780 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.206697 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.221045 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.229577 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.232361 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.232403 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.232414 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.232439 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.232451 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.237440 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czznw\" (UniqueName: \"kubernetes.io/projected/a8ccb95b-da48-49af-a2bf-4d10505c73ae-kube-api-access-czznw\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.237481 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.238326 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.245807 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.251643 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.256337 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.256365 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.256461 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.256709 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.259676 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.267983 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.275159 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.280266 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.286334 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.335091 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.335152 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.335162 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.335179 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.335190 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.338487 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czznw\" (UniqueName: \"kubernetes.io/projected/a8ccb95b-da48-49af-a2bf-4d10505c73ae-kube-api-access-czznw\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.338519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.338645 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.338701 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:25.83868895 +0000 UTC m=+104.312009473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.355204 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czznw\" (UniqueName: \"kubernetes.io/projected/a8ccb95b-da48-49af-a2bf-4d10505c73ae-kube-api-access-czznw\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.437196 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.437238 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.437248 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.437264 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.437275 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.539191 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.539244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.539257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.539275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.539292 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.642393 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.642432 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.642443 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.642461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.642472 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.645418 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" event={"ID":"f6ef5e93-b8e6-4ec8-b07f-841b17f321af","Type":"ContainerStarted","Data":"3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.646791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.648227 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.745633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.745691 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.745701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.745725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.745739 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.842725 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.842951 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:25 crc kubenswrapper[4809]: E0226 14:15:25.843063 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:26.84300266 +0000 UTC m=+105.316323243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.847930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.847984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.848000 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.848046 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.848064 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.950288 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.950328 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.950351 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.950378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:25 crc kubenswrapper[4809]: I0226 14:15:25.950391 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:25Z","lastTransitionTime":"2026-02-26T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.053150 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.053195 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.053208 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.053221 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.053232 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.155462 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.155508 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.155520 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.155538 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.155555 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.256037 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:26 crc kubenswrapper[4809]: E0226 14:15:26.256850 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.257946 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.257972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.257984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.258003 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.258032 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.361085 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.361131 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.361143 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.361164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.361178 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.463947 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.463995 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.464009 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.464042 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.464053 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.567097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.567148 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.567157 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.567174 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.567184 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.653489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.655574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hc768" event={"ID":"3c91705d-1fab-4240-8e70-b3e01e220a8c","Type":"ContainerStarted","Data":"dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.657898 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" event={"ID":"f6ef5e93-b8e6-4ec8-b07f-841b17f321af","Type":"ContainerStarted","Data":"c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.659545 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa" exitCode=0 Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.659597 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.661281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.664646 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.670312 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.670342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.670351 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.670365 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.670377 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.682342 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.692469 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.703247 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.717520 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.726732 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.738661 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.750799 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.760007 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.770489 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.773287 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.773371 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.773395 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.773428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.773451 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.780534 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.790414 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.805190 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc 
kubenswrapper[4809]: I0226 14:15:26.820079 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.836121 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.846817 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.854694 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.854785 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:26 crc kubenswrapper[4809]: E0226 14:15:26.854903 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:26 crc kubenswrapper[4809]: E0226 14:15:26.854963 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:28.854943282 +0000 UTC m=+107.328263805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.862505 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.872342 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.876466 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.876500 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.876510 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.876526 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.876536 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.883323 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.893884 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.904412 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.923409 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.936635 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.951376 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.963241 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.975388 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.979242 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.979316 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.979342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.979372 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.979397 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:26Z","lastTransitionTime":"2026-02-26T14:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:26 crc kubenswrapper[4809]: I0226 14:15:26.991720 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:26Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.006528 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.021567 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.044579 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7
866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.056274 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.056447 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.056538 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.056624 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:15:43.056582133 +0000 UTC m=+121.529902656 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.056676 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.056749 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:43.056727857 +0000 UTC m=+121.530048570 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.056904 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.056999 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:43.056973364 +0000 UTC m=+121.530293887 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.058962 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.082546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.082590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.082612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.082630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.082642 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.157703 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.157751 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.157909 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.157937 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.157951 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.158004 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:43.15798731 +0000 UTC m=+121.631307843 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.157909 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.158047 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.158061 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.158096 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:15:43.158084773 +0000 UTC m=+121.631405306 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.186281 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.186362 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.186407 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.186428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.186441 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.256313 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.256390 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.256424 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.256625 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.257167 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:27 crc kubenswrapper[4809]: E0226 14:15:27.257251 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.288821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.288875 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.288888 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.288908 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.288922 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.392307 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.392379 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.392404 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.392440 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.392468 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.495811 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.495856 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.495866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.495883 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.495894 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.598573 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.598623 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.598633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.598652 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.598666 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.668425 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8" exitCode=0 Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.668579 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.681992 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.697899 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.700517 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.700545 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.700557 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.700575 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.700587 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.713895 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.725891 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.741568 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.755513 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.776376 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.799715 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.803777 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.803819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.803831 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.803853 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.803870 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.837046 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.852827 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.868287 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.883804 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.899389 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.906735 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.906774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.906787 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.906805 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.906818 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:27Z","lastTransitionTime":"2026-02-26T14:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.916804 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.936617 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.958893 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.977745 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:27 crc kubenswrapper[4809]: I0226 14:15:27.990406 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:27Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.010454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.010508 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.010518 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.010538 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.010549 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.020742 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.039728 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.054367 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.067491 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.079760 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.091622 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.104905 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.113210 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.113249 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.113260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.113277 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.113290 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.120779 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.134931 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.149590 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.163494 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.176280 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.200131 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.213413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.215528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.215561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.215573 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.215592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.215603 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.256574 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.256863 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.318851 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.319198 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.319207 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.319220 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.319232 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.421705 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.421747 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.421758 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.421775 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.421785 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.523958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.524005 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.524042 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.524062 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.524076 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.626680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.626728 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.626737 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.626755 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.626764 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.647936 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.647983 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.648034 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.648054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.648068 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.667722 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.672564 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.672610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.672620 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.672638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.672650 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.678891 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerStarted","Data":"942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.684103 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerStarted","Data":"d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.689051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.689095 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.689106 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.690411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pkjv8" event={"ID":"628aecc0-f33d-45bc-a351-897a05a70dff","Type":"ContainerStarted","Data":"06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.694750 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.694955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496"} Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.696380 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch 
status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b1
77c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.705247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.705322 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.705361 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.705387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.705402 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.720109 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.722518 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.726432 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.726520 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.726532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.726553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.726566 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.739786 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.739783 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.745143 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.745200 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.745211 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.745230 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.745242 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.757810 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.760395 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.760572 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.767928 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.767968 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.767978 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.767997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.768026 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.778872 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.798686 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.814346 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.828506 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.851553 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7
866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.865452 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.870576 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.870622 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.870634 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.870653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.870665 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.875473 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.875650 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:28 crc kubenswrapper[4809]: E0226 14:15:28.875726 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:32.875706918 +0000 UTC m=+111.349027441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.879438 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.893114 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.910275 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.930607 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.944336 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.955035 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.971346 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.972928 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.972963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.972972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.972987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.973000 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:28Z","lastTransitionTime":"2026-02-26T14:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.983616 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:28 crc kubenswrapper[4809]: I0226 14:15:28.995328 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:28Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.007297 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.021117 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.033107 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.050646 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z 
is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.065353 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.076612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.076654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.076677 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.076697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.076710 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.082212 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.092004 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.111561 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.124089 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.135873 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.150074 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.173525 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7
866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.179641 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.179679 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.179689 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.179706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.179717 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.189149 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.256610 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.256705 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.256758 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:29 crc kubenswrapper[4809]: E0226 14:15:29.256759 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:29 crc kubenswrapper[4809]: E0226 14:15:29.257071 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:29 crc kubenswrapper[4809]: E0226 14:15:29.257133 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.257524 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:15:29 crc kubenswrapper[4809]: E0226 14:15:29.257802 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.281638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.281694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.281706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.281724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.281736 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.384342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.384380 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.384390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.384405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.384414 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.486998 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.487068 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.487084 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.487106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.487120 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.589742 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.589777 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.589786 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.589802 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.589810 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.692773 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.692805 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.692815 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.692829 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.692837 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.699805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.702039 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.702034 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8" exitCode=0 Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.706540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.706578 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.706591 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.713043 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.727493 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.745306 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.762393 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bc
dbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.774311 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.787193 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.794870 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.794910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.794919 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.794935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.794945 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.809780 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.840802 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.860768 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.873582 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.883769 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.893722 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.897187 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.897403 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.897411 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.897426 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.897435 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.906183 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.924519 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z 
is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.941194 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.954822 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.973571 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.987858 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.999807 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.999849 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.999862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:29 crc kubenswrapper[4809]: I0226 14:15:29.999882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:29.999895 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:29Z","lastTransitionTime":"2026-02-26T14:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.000801 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:29Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.010640 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.021895 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.035064 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.051933 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.069456 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.081867 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.096811 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.103102 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.103145 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.103154 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.103173 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.103184 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.110988 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.132789 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z 
is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.147696 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.165612 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.180395 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.195992 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.205803 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.205845 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.205855 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.205871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.205883 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.256317 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:30 crc kubenswrapper[4809]: E0226 14:15:30.256473 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.308253 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.308309 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.308321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.308341 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.308353 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.411076 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.411129 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.411145 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.411178 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.411199 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.514880 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.514935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.514951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.514974 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.514990 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.618053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.618101 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.618113 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.618130 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.618143 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.712179 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c" exitCode=0 Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.712208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.719862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.719913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.719929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.719951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.719967 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.726997 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.749896 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.772826 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.785116 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.798321 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.809424 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.822402 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.823390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.823433 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.823447 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.823465 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.823478 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.834973 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.845969 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.856407 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.868081 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.882823 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.901663 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.914843 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.925613 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.925652 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.925661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.925676 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.925689 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:30Z","lastTransitionTime":"2026-02-26T14:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.930127 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:30 crc kubenswrapper[4809]: I0226 14:15:30.943582 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.028072 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.028114 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.028122 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.028139 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.028148 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.130411 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.130458 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.130474 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.130498 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.130515 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.233174 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.233224 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.233235 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.233262 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.233276 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.256073 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.256225 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:31 crc kubenswrapper[4809]: E0226 14:15:31.256413 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.256515 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:31 crc kubenswrapper[4809]: E0226 14:15:31.256561 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:31 crc kubenswrapper[4809]: E0226 14:15:31.256662 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.335688 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.335746 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.335763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.335790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.335807 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.438377 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.438418 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.438439 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.438460 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.438475 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.541755 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.541816 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.541835 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.541861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.541880 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.645270 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.645323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.645340 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.645367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.645384 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.720597 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825" exitCode=0 Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.720678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.728539 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.741866 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.747653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.747690 4809 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.747699 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.747715 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.747725 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.762265 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.776851 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.794738 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.807660 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.837807 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850056 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850090 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850104 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850122 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850136 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.850115 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.862797 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.876469 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.890918 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.905270 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.916198 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.931170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.942959 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.951902 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.951939 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.951948 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.951963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.951972 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:31Z","lastTransitionTime":"2026-02-26T14:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.955228 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:31 crc kubenswrapper[4809]: I0226 14:15:31.972143 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:31Z 
is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.054819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.054865 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.054875 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.054894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.054905 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.157910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.157974 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.157991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.158044 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.158066 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.256824 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:32 crc kubenswrapper[4809]: E0226 14:15:32.257036 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.264874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.264913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.264933 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.264950 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.264966 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.282440 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.297778 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.312589 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.325476 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.337987 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.361142 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.367730 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.367776 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.367788 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.367806 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.367818 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.375980 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.386126 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.396281 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.406985 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.432958 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.448222 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.460332 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.469929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.469975 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.469986 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.470004 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.470040 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.474098 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.487111 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.500406 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.572334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.572378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.572387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.572403 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.572414 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.674821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.674880 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.674893 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.674911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.674925 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.735286 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1" exitCode=0 Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.735371 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.750246 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.776421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.777522 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.777627 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.777638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.777657 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.777669 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.789430 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.801501 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.817036 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.827682 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.839838 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.859427 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.872662 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.881068 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.881109 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.881121 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.881139 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.881151 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.883628 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.899071 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.911824 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.919928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:32 crc kubenswrapper[4809]: E0226 14:15:32.920081 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:32 crc kubenswrapper[4809]: E0226 14:15:32.920136 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:40.920120571 +0000 UTC m=+119.393441094 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.923430 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.934791 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.948274 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.965989 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.983924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.983959 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.983969 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.983987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:32 crc kubenswrapper[4809]: I0226 14:15:32.983999 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:32Z","lastTransitionTime":"2026-02-26T14:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.086874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.086914 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.086924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.086939 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.086949 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.188944 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.188981 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.188990 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.189005 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.189042 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.255706 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.255813 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:33 crc kubenswrapper[4809]: E0226 14:15:33.256061 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:33 crc kubenswrapper[4809]: E0226 14:15:33.255832 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.256181 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:33 crc kubenswrapper[4809]: E0226 14:15:33.256271 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.292258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.292300 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.292309 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.292326 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.292336 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.395117 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.395158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.395172 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.395189 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.395200 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.497225 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.497273 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.497287 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.497305 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.497320 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.600481 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.600532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.600547 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.600569 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.600581 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.703046 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.703092 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.703108 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.703124 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.703135 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.744057 4809 generic.go:334] "Generic (PLEG): container finished" podID="021874d0-ff73-40e4-97aa-2f72d648e289" containerID="5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81" exitCode=0 Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.744144 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerDied","Data":"5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.750651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.750999 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.751057 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.763433 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.776487 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.793336 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.794767 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z 
is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.805717 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.805786 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.805798 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.805816 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.805832 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.808210 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.824336 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750
d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.837304 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.851899 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.873645 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.886454 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.898367 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.908790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.908838 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.908849 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.908865 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.908877 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:33Z","lastTransitionTime":"2026-02-26T14:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.912680 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.925957 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.939838 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.955618 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.976365 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:33 crc kubenswrapper[4809]: I0226 14:15:33.989697 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:33Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.007738 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.011586 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.011682 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.011699 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.011722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.011738 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.020461 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.039413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed
45a294c05488a9fe487584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.054165 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.068926 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.080672 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.094751 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.105440 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.115555 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.116150 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.116166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.116185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.116199 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.116404 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc 
kubenswrapper[4809]: I0226 14:15:34.126121 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.146877 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63
ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.161545 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.174657 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.187382 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.200438 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.212620 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.218296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.218334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.218345 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.218363 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.218375 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.256752 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:34 crc kubenswrapper[4809]: E0226 14:15:34.256912 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.320950 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.320999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.321030 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.321054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.321068 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.423768 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.423814 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.423826 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.423844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.423855 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.526504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.526541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.526549 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.526565 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.526573 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.629572 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.629618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.629632 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.629649 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.629662 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.732430 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.732464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.732471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.732485 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.732493 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.758174 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" event={"ID":"021874d0-ff73-40e4-97aa-2f72d648e289","Type":"ContainerStarted","Data":"fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.758617 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.773740 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.780684 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.789156 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.803954 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.817002 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.835502 4809 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.835562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.835578 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.835601 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.835616 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.837423 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.851802 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.863873 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.875954 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.890602 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.900303 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.911406 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.922491 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.938413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.938453 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.938463 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.938477 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.938487 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:34Z","lastTransitionTime":"2026-02-26T14:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.941875 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.953269 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.973526 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.984034 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:34 crc kubenswrapper[4809]: I0226 14:15:34.996376 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:34Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.008696 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.029160 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed
45a294c05488a9fe487584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.041006 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.041065 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.041077 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.041097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.041108 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.044213 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.061118 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.074485 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.094204 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.108229 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.122742 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.135034 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.143437 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.143479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.143492 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.143508 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.143520 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.145779 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.156258 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.169033 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.182706 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.193522 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.204538 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:35Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.245699 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.245727 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.245734 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.245749 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.245757 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.256030 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:35 crc kubenswrapper[4809]: E0226 14:15:35.256173 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.256526 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:35 crc kubenswrapper[4809]: E0226 14:15:35.256594 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.256650 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:35 crc kubenswrapper[4809]: E0226 14:15:35.256711 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.348328 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.348363 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.348374 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.348391 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.348413 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.450983 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.451038 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.451050 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.451066 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.451078 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.553194 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.553235 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.553244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.553260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.553271 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.655182 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.655224 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.655236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.655254 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.655266 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.758106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.758158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.758172 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.758191 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.758204 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.861378 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.861435 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.861450 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.861470 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.861487 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.963680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.963716 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.963725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.963741 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:35 crc kubenswrapper[4809]: I0226 14:15:35.963751 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:35Z","lastTransitionTime":"2026-02-26T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.070377 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.070423 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.070435 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.070454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.070466 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.172875 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.172922 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.172931 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.172947 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.172955 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.257930 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:36 crc kubenswrapper[4809]: E0226 14:15:36.258777 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.275174 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.275222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.275236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.275258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.275269 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.377720 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.377767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.377778 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.377796 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.377806 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.480625 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.480693 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.480712 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.480742 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.480761 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.583980 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.584097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.584124 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.584158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.584182 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.687373 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.687424 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.687437 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.687455 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.687469 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.779580 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/0.log" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.783889 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2" exitCode=1 Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.783936 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.784938 4809 scope.go:117] "RemoveContainer" containerID="24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.789597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.789643 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.789657 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.789674 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.789687 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.801592 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.814908 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.828131 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.841079 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.856172 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.870617 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.888296 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed
45a294c05488a9fe487584b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:36Z\\\",\\\"message\\\":\\\"77 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148098 6677 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 14:15:36.148211 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 14:15:36.148309 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148528 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148852 6677 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 14:15:36.148868 6677 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 14:15:36.148894 6677 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 14:15:36.148920 6677 factory.go:656] Stopping watch factory\\\\nI0226 14:15:36.148932 6677 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 14:15:36.148940 6677 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 14:15:36.148946 6677 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.891667 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.891720 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.891749 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.891770 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.891782 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.900290 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.914698 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.925198 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.937176 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.960856 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.976520 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.989940 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.993514 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.993552 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.993563 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.993581 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:36 crc kubenswrapper[4809]: I0226 14:15:36.993596 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:36Z","lastTransitionTime":"2026-02-26T14:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.000451 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:36Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.010460 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.095911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.095954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.095965 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.095983 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.095994 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.217096 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.217260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.217287 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.217312 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.217327 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.256064 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.256195 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:37 crc kubenswrapper[4809]: E0226 14:15:37.256233 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.256079 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:37 crc kubenswrapper[4809]: E0226 14:15:37.256393 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:37 crc kubenswrapper[4809]: E0226 14:15:37.256541 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.320727 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.320762 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.320774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.320790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.320800 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.422954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.423006 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.423042 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.423066 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.423087 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.524781 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.524809 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.524820 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.524833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.524841 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.627586 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.627627 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.627638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.627653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.627662 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.730839 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.730909 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.730927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.730952 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.730977 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.788103 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/0.log" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.789970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.790917 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.799841 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.833412 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.833439 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.833448 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.833464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.833473 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.837536 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.857712 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.875580 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.889000 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.900596 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.911526 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.923330 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.936132 4809 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.936166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.936177 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.936195 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.936208 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:37Z","lastTransitionTime":"2026-02-26T14:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.944491 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.958661 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.968611 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.981172 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:37 crc kubenswrapper[4809]: I0226 14:15:37.993781 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.010758 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:36Z\\\",\\\"message\\\":\\\"77 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148098 6677 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 14:15:36.148211 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 14:15:36.148309 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148528 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148852 6677 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 14:15:36.148868 6677 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 14:15:36.148894 6677 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 14:15:36.148920 6677 factory.go:656] Stopping watch factory\\\\nI0226 14:15:36.148932 6677 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 14:15:36.148940 6677 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 14:15:36.148946 6677 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.024628 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038111 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038155 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038168 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038189 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038203 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.038700 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.141403 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.141461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.141473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.141494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.141507 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.243904 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.243971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.243981 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.244001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.244037 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.256715 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:38 crc kubenswrapper[4809]: E0226 14:15:38.256904 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.346363 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.346409 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.346423 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.346441 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.346453 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.449334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.449370 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.449379 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.449393 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.449403 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.552385 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.552442 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.552454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.552472 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.552484 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.655506 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.655543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.655553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.655572 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.655586 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.759032 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.759090 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.759109 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.759137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.759155 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.795566 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/1.log" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.796549 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/0.log" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.800546 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8" exitCode=1 Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.800601 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.800660 4809 scope.go:117] "RemoveContainer" containerID="24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.801741 4809 scope.go:117] "RemoveContainer" containerID="80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8" Feb 26 14:15:38 crc kubenswrapper[4809]: E0226 14:15:38.802110 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.826631 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.847106 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862065 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862388 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862400 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862416 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.862432 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.874550 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.886811 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.903144 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.922937 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.939500 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.956004 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.965221 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.965262 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.965273 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.965289 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.965301 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.972350 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.980767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.980841 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.980850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.980866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.980875 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.988131 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: E0226 14:15:38.994950 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.999734 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.999780 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.999791 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:38 crc kubenswrapper[4809]: I0226 14:15:38.999810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:38.999824 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:38Z","lastTransitionTime":"2026-02-26T14:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.008726 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T14:15:38Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.014304 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.018769 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.018808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.018821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.018840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.018852 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.021738 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.032379 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.035108 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.037137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.037184 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.037196 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.037220 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.037233 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.049951 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.054912 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.058931 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.059007 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.059040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.059065 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.059078 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.072201 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.072322 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.073819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.073851 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.073866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.073883 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.073896 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.075879 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558
ad144dcabd7dfe0feac935a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24b618974ca4659aaab6d47e1851dd46018f93ed45a294c05488a9fe487584b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:36Z\\\",\\\"message\\\":\\\"77 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148098 6677 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 14:15:36.148211 6677 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0226 14:15:36.148309 6677 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148528 6677 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0226 14:15:36.148852 6677 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0226 14:15:36.148868 6677 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0226 14:15:36.148894 6677 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 14:15:36.148920 6677 factory.go:656] Stopping watch factory\\\\nI0226 14:15:36.148932 6677 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 14:15:36.148940 6677 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 14:15:36.148946 6677 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 
3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] []},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.176399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.176796 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.176808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.176825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.176834 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.256435 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.256547 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.256600 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.256616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.256766 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.256860 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.279540 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.279593 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.279609 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.279631 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.279643 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.382082 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.382128 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.382140 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.382161 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.382171 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.485228 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.485270 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.485279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.485293 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.485304 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.587285 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.587340 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.587353 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.587373 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.587386 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.689600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.689642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.689660 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.689676 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.689686 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.792336 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.792406 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.792464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.792488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.792505 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.807567 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/1.log" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.811819 4809 scope.go:117] "RemoveContainer" containerID="80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8" Feb 26 14:15:39 crc kubenswrapper[4809]: E0226 14:15:39.812204 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.825586 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.843468 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.861433 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.872772 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.887977 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.894824 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.895549 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.895566 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.895588 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.895604 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.902215 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
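The "back-off 10s restarting failed container=ovnkube-controller" error earlier in this section, and the CrashLoopBackOff waiting state that reappears in the ovnkube-node-qwqmq status below, come from the kubelet's crash-loop throttling: each failed restart of the container lengthens the delay before the next attempt. A minimal sketch of that schedule, assuming the kubelet's stock defaults (an initial 10-second delay that roughly doubles per failure up to a five-minute cap; nothing here is read from this cluster's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // matches the "back-off 10s" message above
        const maxDelay = 5 * time.Minute // assumed default cap

        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("failed restart %d: next attempt in %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

So the ovnkube-controller restarts recorded here are expected to get progressively slower until the certificate problem visible in its termination message below is fixed.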
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.923573 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558
ad144dcabd7dfe0feac935a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] []},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.937028 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.951346 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.962192 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.974682 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.988101 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.998080 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.998133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.998144 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.998162 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:39 crc kubenswrapper[4809]: I0226 14:15:39.998174 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:39Z","lastTransitionTime":"2026-02-26T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.000316 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:39Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.011941 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:40Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.031448 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:40Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.044635 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:40Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.101142 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.101195 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.101208 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.101228 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.101241 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.203788 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.203833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.203845 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.203863 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.203877 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.256380 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:40 crc kubenswrapper[4809]: E0226 14:15:40.256547 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.306212 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.306279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.306303 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.306331 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.306348 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.408680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.408724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.408735 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.408752 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.408763 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.511347 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.511399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.511412 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.511431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.511470 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.614047 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.614100 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.614111 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.614128 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.614138 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.718163 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.718213 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.718247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.718267 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.718278 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.820848 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.820896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.820905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.820924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.820933 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.923493 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.923539 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.923549 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.923567 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:40 crc kubenswrapper[4809]: I0226 14:15:40.923577 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:40Z","lastTransitionTime":"2026-02-26T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.011570 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:41 crc kubenswrapper[4809]: E0226 14:15:41.011746 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:41 crc kubenswrapper[4809]: E0226 14:15:41.011819 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:15:57.011802927 +0000 UTC m=+135.485123450 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.025457 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.025494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.025505 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.025523 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.025537 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.127706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.127750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.127759 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.127774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.127785 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.230338 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.230376 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.230385 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.230400 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.230411 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.255774 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.255832 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.255901 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:41 crc kubenswrapper[4809]: E0226 14:15:41.255918 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:41 crc kubenswrapper[4809]: E0226 14:15:41.255995 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:41 crc kubenswrapper[4809]: E0226 14:15:41.256082 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.334392 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.334456 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.334471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.334493 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.334517 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.437392 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.437443 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.437454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.437470 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.437481 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.540410 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.540471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.540486 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.540506 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.540522 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.643128 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.643169 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.643180 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.643199 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.643212 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.746169 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.746217 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.746227 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.746242 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.746254 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.848808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.848850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.848862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.848882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.848893 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.951841 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.951905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.951922 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.951946 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:41 crc kubenswrapper[4809]: I0226 14:15:41.951964 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:41Z","lastTransitionTime":"2026-02-26T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.054688 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.054729 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.054741 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.054756 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.054766 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:42Z","lastTransitionTime":"2026-02-26T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.157886 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.158267 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.158280 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.158301 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.158313 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:42Z","lastTransitionTime":"2026-02-26T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.255716 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:42 crc kubenswrapper[4809]: E0226 14:15:42.255926 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:42 crc kubenswrapper[4809]: E0226 14:15:42.258516 4809 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.272264 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.288633 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.308585 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] []},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.322395 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.340539 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.350884 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: E0226 14:15:42.358177 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.374568 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b
68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.386960 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.397763 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.411890 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.424331 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.436799 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.454790 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.468319 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.480198 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:42 crc kubenswrapper[4809]: I0226 14:15:42.490651 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:42Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.133730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.133962 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:16:15.133933268 +0000 UTC m=+153.607253791 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.134105 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.134176 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.134315 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.134319 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.134364 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:16:15.13435635 +0000 UTC m=+153.607676873 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.134450 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:16:15.134416632 +0000 UTC m=+153.607737215 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.235272 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.235325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235532 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235552 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235566 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235580 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235627 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:16:15.235611103 +0000 UTC m=+153.708931646 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235632 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235654 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.235736 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:16:15.235709706 +0000 UTC m=+153.709030289 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.255963 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.255985 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.256037 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.256549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.256596 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:43 crc kubenswrapper[4809]: E0226 14:15:43.256690 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.256701 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.266983 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.826575 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.828124 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6"} Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.828797 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.854876 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558
ad144dcabd7dfe0feac935a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] []},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.869886 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.882267 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.894402 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.918958 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.933743 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.946137 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.958620 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.971338 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.982026 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:43 crc kubenswrapper[4809]: I0226 14:15:43.991518 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:43Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.002189 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.023082 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.038234 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.049964 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.062535 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.077715 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:44Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:44 crc kubenswrapper[4809]: I0226 14:15:44.255915 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:44 crc kubenswrapper[4809]: E0226 14:15:44.256081 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:45 crc kubenswrapper[4809]: I0226 14:15:45.256441 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:45 crc kubenswrapper[4809]: I0226 14:15:45.256562 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:45 crc kubenswrapper[4809]: E0226 14:15:45.256725 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:45 crc kubenswrapper[4809]: I0226 14:15:45.256784 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:45 crc kubenswrapper[4809]: E0226 14:15:45.256852 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:45 crc kubenswrapper[4809]: E0226 14:15:45.257167 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:46 crc kubenswrapper[4809]: I0226 14:15:46.255936 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:46 crc kubenswrapper[4809]: E0226 14:15:46.256088 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:47 crc kubenswrapper[4809]: I0226 14:15:47.255913 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:47 crc kubenswrapper[4809]: I0226 14:15:47.255946 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:47 crc kubenswrapper[4809]: I0226 14:15:47.255991 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:47 crc kubenswrapper[4809]: E0226 14:15:47.256144 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:47 crc kubenswrapper[4809]: E0226 14:15:47.256290 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:47 crc kubenswrapper[4809]: E0226 14:15:47.256402 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:47 crc kubenswrapper[4809]: E0226 14:15:47.360064 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:15:48 crc kubenswrapper[4809]: I0226 14:15:48.256283 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:48 crc kubenswrapper[4809]: E0226 14:15:48.256452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.137768 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.137825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.137838 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.137855 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.137868 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:49Z","lastTransitionTime":"2026-02-26T14:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.150319 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:49Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.153921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.153971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.153984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.154034 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.154048 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:49Z","lastTransitionTime":"2026-02-26T14:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.166620 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:49Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.171082 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.171121 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.171132 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.171147 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.171159 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:49Z","lastTransitionTime":"2026-02-26T14:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.183673 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:49Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.188061 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.188111 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.188128 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.188153 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.188170 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:49Z","lastTransitionTime":"2026-02-26T14:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.200563 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:49Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.204222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.204273 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.204291 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.204310 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.204324 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:49Z","lastTransitionTime":"2026-02-26T14:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.217540 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:49Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.217660 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.256407 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.256549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.256897 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.256964 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:49 crc kubenswrapper[4809]: I0226 14:15:49.257033 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:49 crc kubenswrapper[4809]: E0226 14:15:49.257099 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:50 crc kubenswrapper[4809]: I0226 14:15:50.256091 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:50 crc kubenswrapper[4809]: E0226 14:15:50.256228 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.256335 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.256498 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:51 crc kubenswrapper[4809]: E0226 14:15:51.256657 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.256866 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:51 crc kubenswrapper[4809]: E0226 14:15:51.257093 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:51 crc kubenswrapper[4809]: E0226 14:15:51.257248 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.257909 4809 scope.go:117] "RemoveContainer" containerID="80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.280692 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.856180 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/1.log" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.859657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a"} Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.860336 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.874390 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.890749 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.904777 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.926263 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.942686 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.959385 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.980279 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:51 crc kubenswrapper[4809]: I0226 14:15:51.993452 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:51Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.004098 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.016132 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.030530 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.050162 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.067067 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.079676 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.090429 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.102573 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.115658 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.134038 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.256207 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:52 crc kubenswrapper[4809]: E0226 14:15:52.256324 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.270307 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.281397 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.294123 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.313568 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.327714 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.339077 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.350321 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: E0226 14:15:52.360473 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.368468 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 
retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.383883 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.397886 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.409349 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.419391 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.432146 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.444579 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.462155 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.473978 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.485818 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.495225 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.868267 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/2.log" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.869787 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/1.log" Feb 26 14:15:52 crc 
kubenswrapper[4809]: I0226 14:15:52.874130 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" exitCode=1 Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.874202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a"} Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.874292 4809 scope.go:117] "RemoveContainer" containerID="80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.875234 4809 scope.go:117] "RemoveContainer" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" Feb 26 14:15:52 crc kubenswrapper[4809]: E0226 14:15:52.875491 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.895259 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.911506 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.923917 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.937195 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.950092 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.968998 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.983472 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:52 crc kubenswrapper[4809]: I0226 14:15:52.994833 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:52Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.005789 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 
14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.017358 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.031750 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.045534 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.057590 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.070620 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.088951 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80931285a6a22d43be012c11c821502f5b745558ad144dcabd7dfe0feac935a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:38Z\\\",\\\"message\\\":\\\"imeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:37Z is after 2025-08-24T17:21:41Z]\\\\nI0226 14:15:37.994134 6803 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0226 14:15:37.994157 6803 services_controller.go:434] Service openshift-network-diagnostics/network-check-target retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{network-check-target openshift-network-diagnostics 3e2ce0c7-84ea-44e4-bf4a-d2f8388134f5 2812 0 2025-02-23 05:21:38 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc0075c53b7 0xc0075c53b8}] [] []},Spec:ServiceSpec{Ports:[]ServicePor\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.100966 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.116186 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.126554 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.255989 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.255989 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.256214 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:53 crc kubenswrapper[4809]: E0226 14:15:53.256295 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:53 crc kubenswrapper[4809]: E0226 14:15:53.256129 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:53 crc kubenswrapper[4809]: E0226 14:15:53.256354 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.878557 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/2.log" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.883067 4809 scope.go:117] "RemoveContainer" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" Feb 26 14:15:53 crc kubenswrapper[4809]: E0226 14:15:53.883325 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.894475 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.906314 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.919093 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.929102 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.939234 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.952219 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.966271 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.982413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:53 crc kubenswrapper[4809]: I0226 14:15:53.996711 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:53Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.012961 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.023160 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.034980 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.056998 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:
13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.071861 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.084330 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.096208 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.108656 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.118993 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:54Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:54 crc kubenswrapper[4809]: I0226 14:15:54.255894 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:54 crc kubenswrapper[4809]: E0226 14:15:54.256045 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:55 crc kubenswrapper[4809]: I0226 14:15:55.255680 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:55 crc kubenswrapper[4809]: I0226 14:15:55.255750 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:55 crc kubenswrapper[4809]: I0226 14:15:55.255817 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:55 crc kubenswrapper[4809]: E0226 14:15:55.255868 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:55 crc kubenswrapper[4809]: E0226 14:15:55.255906 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:55 crc kubenswrapper[4809]: E0226 14:15:55.255812 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:56 crc kubenswrapper[4809]: I0226 14:15:56.258350 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:56 crc kubenswrapper[4809]: E0226 14:15:56.258876 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:56 crc kubenswrapper[4809]: I0226 14:15:56.273357 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 14:15:57 crc kubenswrapper[4809]: I0226 14:15:57.080722 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.080924 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.081192 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:16:29.081168333 +0000 UTC m=+167.554488896 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:15:57 crc kubenswrapper[4809]: I0226 14:15:57.256620 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:57 crc kubenswrapper[4809]: I0226 14:15:57.256652 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.256772 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:57 crc kubenswrapper[4809]: I0226 14:15:57.257152 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.257290 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.257352 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:57 crc kubenswrapper[4809]: E0226 14:15:57.361888 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.120349 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.133179 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.145583 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.161302 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.173811 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.186209 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.199795 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.211814 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.235879 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.251222 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.256044 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:15:58 crc kubenswrapper[4809]: E0226 14:15:58.256176 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.264906 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.276520 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.286567 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.298584 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.307089 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.327229 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.340754 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.351036 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.364058 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:58 crc kubenswrapper[4809]: I0226 14:15:58.376785 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:58Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.255977 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.256102 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.256144 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.255986 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.256259 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.256584 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.528104 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.528151 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.528163 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.528180 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.528192 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:59Z","lastTransitionTime":"2026-02-26T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.541141 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:59Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.545449 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.545516 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.545554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.545579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.545596 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:59Z","lastTransitionTime":"2026-02-26T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.569738 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:59Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.573990 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.574045 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.574058 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.574074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.574087 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:59Z","lastTransitionTime":"2026-02-26T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.592174 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:59Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.596847 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.596897 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.596906 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.596920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.596932 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:59Z","lastTransitionTime":"2026-02-26T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.611990 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:59Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.615528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.615564 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.615575 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.615606 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:15:59 crc kubenswrapper[4809]: I0226 14:15:59.615621 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:15:59Z","lastTransitionTime":"2026-02-26T14:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.627278 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:15:59Z is after 2025-08-24T17:21:41Z" Feb 26 14:15:59 crc kubenswrapper[4809]: E0226 14:15:59.627435 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:16:00 crc kubenswrapper[4809]: I0226 14:16:00.255982 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 14:16:00 crc kubenswrapper[4809]: E0226 14:16:00.256148 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 14:16:01 crc kubenswrapper[4809]: I0226 14:16:01.256317 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482"
Feb 26 14:16:01 crc kubenswrapper[4809]: I0226 14:16:01.256372 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 14:16:01 crc kubenswrapper[4809]: E0226 14:16:01.257049 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae"
Feb 26 14:16:01 crc kubenswrapper[4809]: I0226 14:16:01.256371 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 14:16:01 crc kubenswrapper[4809]: E0226 14:16:01.257180 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 14:16:01 crc kubenswrapper[4809]: E0226 14:16:01.257215 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.256506 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 14:16:02 crc kubenswrapper[4809]: E0226 14:16:02.256688 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.273356 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.288235 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.299830 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.312412 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.324927 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.340715 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: E0226 14:16:02.362408 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.363102 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.383082 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.400697 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b144723538841
6b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.415109 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.430799 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.452500 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.470821 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.484489 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.500597 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.520398 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.534289 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.551924 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:02 crc kubenswrapper[4809]: I0226 14:16:02.569377 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:02Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:03 crc kubenswrapper[4809]: I0226 14:16:03.255676 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:03 crc kubenswrapper[4809]: I0226 14:16:03.255720 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:03 crc kubenswrapper[4809]: I0226 14:16:03.255778 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:03 crc kubenswrapper[4809]: E0226 14:16:03.255820 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:03 crc kubenswrapper[4809]: E0226 14:16:03.255907 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:03 crc kubenswrapper[4809]: E0226 14:16:03.256046 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:04 crc kubenswrapper[4809]: I0226 14:16:04.256061 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:04 crc kubenswrapper[4809]: E0226 14:16:04.256235 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:05 crc kubenswrapper[4809]: I0226 14:16:05.256363 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:05 crc kubenswrapper[4809]: I0226 14:16:05.256469 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:05 crc kubenswrapper[4809]: I0226 14:16:05.256467 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:05 crc kubenswrapper[4809]: E0226 14:16:05.256640 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:05 crc kubenswrapper[4809]: E0226 14:16:05.256784 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:05 crc kubenswrapper[4809]: I0226 14:16:05.257722 4809 scope.go:117] "RemoveContainer" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" Feb 26 14:16:05 crc kubenswrapper[4809]: E0226 14:16:05.257743 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:05 crc kubenswrapper[4809]: E0226 14:16:05.258098 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:16:06 crc kubenswrapper[4809]: I0226 14:16:06.255772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:06 crc kubenswrapper[4809]: E0226 14:16:06.255983 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:07 crc kubenswrapper[4809]: I0226 14:16:07.256155 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:07 crc kubenswrapper[4809]: I0226 14:16:07.256297 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:07 crc kubenswrapper[4809]: E0226 14:16:07.256300 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:07 crc kubenswrapper[4809]: E0226 14:16:07.256494 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:07 crc kubenswrapper[4809]: I0226 14:16:07.256792 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:07 crc kubenswrapper[4809]: E0226 14:16:07.256957 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:07 crc kubenswrapper[4809]: E0226 14:16:07.364357 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:08 crc kubenswrapper[4809]: I0226 14:16:08.256556 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:08 crc kubenswrapper[4809]: E0226 14:16:08.256728 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.256581 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.256659 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.256747 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.256841 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.257106 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.257864 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.647682 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.647733 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.647749 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.647772 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.647783 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:09Z","lastTransitionTime":"2026-02-26T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.668417 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:09Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.672811 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.672855 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.672866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.672886 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.672898 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:09Z","lastTransitionTime":"2026-02-26T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.716369 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:09Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.726158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.726213 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.726265 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.726289 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.726302 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:09Z","lastTransitionTime":"2026-02-26T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.745232 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:09Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.753371 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.753445 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.753504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.753545 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.753567 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:09Z","lastTransitionTime":"2026-02-26T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.772846 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:09Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.777997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.778068 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.778079 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.778097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:09 crc kubenswrapper[4809]: I0226 14:16:09.778130 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:09Z","lastTransitionTime":"2026-02-26T14:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.794971 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:09Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:09 crc kubenswrapper[4809]: E0226 14:16:09.795184 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:16:10 crc kubenswrapper[4809]: I0226 14:16:10.256303 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:10 crc kubenswrapper[4809]: E0226 14:16:10.256447 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:11 crc kubenswrapper[4809]: I0226 14:16:11.255887 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:11 crc kubenswrapper[4809]: I0226 14:16:11.255997 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:11 crc kubenswrapper[4809]: I0226 14:16:11.256059 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:11 crc kubenswrapper[4809]: E0226 14:16:11.256220 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:11 crc kubenswrapper[4809]: E0226 14:16:11.256354 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:11 crc kubenswrapper[4809]: E0226 14:16:11.256480 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.255956 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:12 crc kubenswrapper[4809]: E0226 14:16:12.256425 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.269456 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.350883 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: E0226 14:16:12.365356 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.372307 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\
\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.388643 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.403439 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.422272 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.442298 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.473580 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.489918 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.511360 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.526669 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.540336 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.554650 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.566754 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.591526 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.609313 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.626089 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.640072 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:12 crc kubenswrapper[4809]: I0226 14:16:12.651926 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:12Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:13 crc kubenswrapper[4809]: I0226 14:16:13.255646 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:13 crc kubenswrapper[4809]: I0226 14:16:13.255690 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:13 crc kubenswrapper[4809]: I0226 14:16:13.255741 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:13 crc kubenswrapper[4809]: E0226 14:16:13.255806 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:13 crc kubenswrapper[4809]: E0226 14:16:13.255884 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:13 crc kubenswrapper[4809]: E0226 14:16:13.256051 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.256463 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:14 crc kubenswrapper[4809]: E0226 14:16:14.256707 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.962101 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/0.log" Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.962167 4809 generic.go:334] "Generic (PLEG): container finished" podID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" containerID="942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6" exitCode=1 Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.962205 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerDied","Data":"942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6"} Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.962676 4809 scope.go:117] "RemoveContainer" containerID="942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6" Feb 26 14:16:14 crc kubenswrapper[4809]: I0226 14:16:14.984949 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:14Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.000642 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:14Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.016473 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.028469 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.041617 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.057421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.074595 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.090686 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.103997 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.118421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.132051 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.147247 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.159119 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.179196 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.179282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.179352 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.179395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.179441 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:19.179425549 +0000 UTC m=+217.652746072 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.179479 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.179519 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:17:19.179512082 +0000 UTC m=+217.652832605 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.179607 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.179761 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:17:19.179735408 +0000 UTC m=+217.653055931 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.194223 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.205117 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.218634 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.235127 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.251746 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.255979 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.256073 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.256150 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.256207 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.256343 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.256388 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.280590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.280655 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280802 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280843 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280802 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280872 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280882 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280936 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:17:19.280918168 +0000 UTC m=+217.754238691 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.280858 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:16:15 crc kubenswrapper[4809]: E0226 14:16:15.281072 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:17:19.281030842 +0000 UTC m=+217.754351515 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.966929 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/0.log" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.967117 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerStarted","Data":"e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639"} Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.980873 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:15 crc kubenswrapper[4809]: I0226 14:16:15.992650 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:15Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.006088 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.023703 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.042589 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.060551 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5
706dec099ef2d85cc1f1ea9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.076565 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.090467 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.103180 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.120708 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.135447 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.148091 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.168406 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.184685 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.199879 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.214456 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.228000 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.239615 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.256420 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:16 crc kubenswrapper[4809]: E0226 14:16:16.256563 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:16 crc kubenswrapper[4809]: I0226 14:16:16.261830 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:16Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:17 crc kubenswrapper[4809]: I0226 14:16:17.256418 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:17 crc kubenswrapper[4809]: I0226 14:16:17.256507 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:17 crc kubenswrapper[4809]: I0226 14:16:17.256418 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:17 crc kubenswrapper[4809]: E0226 14:16:17.256581 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:17 crc kubenswrapper[4809]: E0226 14:16:17.256676 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:17 crc kubenswrapper[4809]: E0226 14:16:17.256763 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:17 crc kubenswrapper[4809]: E0226 14:16:17.366924 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:18 crc kubenswrapper[4809]: I0226 14:16:18.256148 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:18 crc kubenswrapper[4809]: E0226 14:16:18.256355 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.256607 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.256690 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.256655 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.256825 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.256967 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.257126 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.825707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.825766 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.825777 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.825794 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.825803 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:19Z","lastTransitionTime":"2026-02-26T14:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.842582 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:19Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.846756 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.846812 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.846826 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.846845 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.846860 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:19Z","lastTransitionTime":"2026-02-26T14:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.862913 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:19Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.868028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.868074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.868088 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.868104 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.868115 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:19Z","lastTransitionTime":"2026-02-26T14:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.881328 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:19Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.886035 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.886067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.886077 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.886092 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.886106 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:19Z","lastTransitionTime":"2026-02-26T14:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.906797 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:19Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.912630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.912678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.912689 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.912709 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:19 crc kubenswrapper[4809]: I0226 14:16:19.912723 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:19Z","lastTransitionTime":"2026-02-26T14:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.925746 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:19Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:19 crc kubenswrapper[4809]: E0226 14:16:19.925945 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:16:20 crc kubenswrapper[4809]: I0226 14:16:20.256594 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:20 crc kubenswrapper[4809]: E0226 14:16:20.256834 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:20 crc kubenswrapper[4809]: I0226 14:16:20.258306 4809 scope.go:117] "RemoveContainer" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" Feb 26 14:16:20 crc kubenswrapper[4809]: I0226 14:16:20.985143 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/2.log" Feb 26 14:16:20 crc kubenswrapper[4809]: I0226 14:16:20.988180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f"} Feb 26 14:16:21 crc kubenswrapper[4809]: I0226 14:16:21.256308 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:21 crc kubenswrapper[4809]: I0226 14:16:21.256341 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:21 crc kubenswrapper[4809]: I0226 14:16:21.256308 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:21 crc kubenswrapper[4809]: E0226 14:16:21.256451 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:21 crc kubenswrapper[4809]: E0226 14:16:21.256524 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:21 crc kubenswrapper[4809]: E0226 14:16:21.256751 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:21 crc kubenswrapper[4809]: I0226 14:16:21.992202 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.010869 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.024723 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.036832 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.049851 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.066479 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.078527 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.102959 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c
038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.116170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.131565 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.143940 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.160399 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.178364 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.201190 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.215638 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.230728 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.247687 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.256063 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:22 crc kubenswrapper[4809]: E0226 14:16:22.256245 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.262664 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.274212 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.285810 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.308696 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.325433 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.340002 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.350215 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.362779 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: E0226 14:16:22.367490 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.388174 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.401832 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.414460 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.426709 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.439222 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.448217 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.463106 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.477563 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.511849 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.525560 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.540351 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.552862 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.567466 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.582404 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:22Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.997232 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log" Feb 26 14:16:22 crc kubenswrapper[4809]: I0226 14:16:22.998144 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/2.log" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.000663 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" exitCode=1 Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.000718 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f"} Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.000766 4809 scope.go:117] "RemoveContainer" containerID="a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.001287 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:16:23 crc kubenswrapper[4809]: E0226 14:16:23.001452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.020640 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.034507 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.052783 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c
038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a67507897dbad02e0d15a79f3aa33404661fbab5706dec099ef2d85cc1f1ea9a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:15:52Z\\\",\\\"message\\\":\\\"enshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0226 14:15:52.174382 7022 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0226 14:15:52.174388 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0226 14:15:52.174392 7022 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0226 14:15:52.174173 7022 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0226 14:15:52.174390 7022 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0226 14:15:52.174286 7022 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:22Z\\\",\\\"message\\\":\\\"7356 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-daemon per-node LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368484 7356 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-daemon template LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368435 7356 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.073400 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.095602 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.111676 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.124912 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.149239 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.168458 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.185452 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.201477 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.220766 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.243915 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.256135 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.256314 4809 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:23 crc kubenswrapper[4809]: E0226 14:16:23.256860 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.256383 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.256346 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:23 crc kubenswrapper[4809]: E0226 14:16:23.256975 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:23 crc kubenswrapper[4809]: E0226 14:16:23.257112 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.270136 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.283335 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.296353 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.309990 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:23 crc kubenswrapper[4809]: I0226 14:16:23.324296 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:23Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.006792 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.010807 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:16:24 crc kubenswrapper[4809]: E0226 14:16:24.011036 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.023946 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.038386 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.050932 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.063546 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.076936 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.089503 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.099583 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.110292 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.118870 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.140285 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0
e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.156767 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b
2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.165738 4809 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.178318 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.189773 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.200260 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.212911 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.222902 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.232237 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.249353 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c
038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:22Z\\\",\\\"message\\\":\\\"7356 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-daemon per-node LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368484 7356 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-daemon template LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368435 7356 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:24Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:24 crc kubenswrapper[4809]: I0226 14:16:24.256248 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:24 crc kubenswrapper[4809]: E0226 14:16:24.256391 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:25 crc kubenswrapper[4809]: I0226 14:16:25.256579 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:25 crc kubenswrapper[4809]: I0226 14:16:25.256694 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:25 crc kubenswrapper[4809]: E0226 14:16:25.256779 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:25 crc kubenswrapper[4809]: I0226 14:16:25.256860 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:25 crc kubenswrapper[4809]: E0226 14:16:25.256995 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:25 crc kubenswrapper[4809]: E0226 14:16:25.257177 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:26 crc kubenswrapper[4809]: I0226 14:16:26.256329 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:26 crc kubenswrapper[4809]: E0226 14:16:26.256507 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:27 crc kubenswrapper[4809]: I0226 14:16:27.255902 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:27 crc kubenswrapper[4809]: I0226 14:16:27.255973 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:27 crc kubenswrapper[4809]: E0226 14:16:27.256151 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:27 crc kubenswrapper[4809]: I0226 14:16:27.255907 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:27 crc kubenswrapper[4809]: E0226 14:16:27.256677 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:27 crc kubenswrapper[4809]: E0226 14:16:27.256569 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:27 crc kubenswrapper[4809]: E0226 14:16:27.368658 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:28 crc kubenswrapper[4809]: I0226 14:16:28.256705 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:28 crc kubenswrapper[4809]: E0226 14:16:28.256849 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:29 crc kubenswrapper[4809]: I0226 14:16:29.134250 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:29 crc kubenswrapper[4809]: E0226 14:16:29.134407 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:16:29 crc kubenswrapper[4809]: E0226 14:16:29.134471 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs podName:a8ccb95b-da48-49af-a2bf-4d10505c73ae nodeName:}" failed. No retries permitted until 2026-02-26 14:17:33.134456178 +0000 UTC m=+231.607776701 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs") pod "network-metrics-daemon-55482" (UID: "a8ccb95b-da48-49af-a2bf-4d10505c73ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 14:16:29 crc kubenswrapper[4809]: I0226 14:16:29.256593 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:29 crc kubenswrapper[4809]: I0226 14:16:29.256624 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:29 crc kubenswrapper[4809]: I0226 14:16:29.256666 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:29 crc kubenswrapper[4809]: E0226 14:16:29.256757 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:29 crc kubenswrapper[4809]: E0226 14:16:29.256876 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:29 crc kubenswrapper[4809]: E0226 14:16:29.256966 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.110554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.110612 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.110627 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.110650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.110667 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:30Z","lastTransitionTime":"2026-02-26T14:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.123199 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.126485 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.126528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.126537 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.126558 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.126570 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:30Z","lastTransitionTime":"2026-02-26T14:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.139198 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.143200 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.143247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.143259 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.143276 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.143289 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:30Z","lastTransitionTime":"2026-02-26T14:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.156062 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.159825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.159890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.159909 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.159928 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.159940 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:30Z","lastTransitionTime":"2026-02-26T14:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.177748 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.181891 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.181920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.181931 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.181945 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.181956 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:30Z","lastTransitionTime":"2026-02-26T14:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.196947 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:30Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.197207 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:16:30 crc kubenswrapper[4809]: I0226 14:16:30.256490 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:30 crc kubenswrapper[4809]: E0226 14:16:30.256633 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:31 crc kubenswrapper[4809]: I0226 14:16:31.256581 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:31 crc kubenswrapper[4809]: E0226 14:16:31.257201 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:31 crc kubenswrapper[4809]: I0226 14:16:31.256617 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:31 crc kubenswrapper[4809]: I0226 14:16:31.256581 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:31 crc kubenswrapper[4809]: E0226 14:16:31.257362 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:31 crc kubenswrapper[4809]: E0226 14:16:31.257483 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.256482 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:32 crc kubenswrapper[4809]: E0226 14:16:32.256656 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.271832 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.288793 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:22Z\\\",\\\"message\\\":\\\"7356 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-daemon per-node LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368484 7356 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-daemon template LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368435 7356 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.303646 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.321815 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b144723538841
6b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.333468 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.352127 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: E0226 14:16:32.370089 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.387543 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\"
:\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.400445 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.410648 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.421287 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.432096 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.440154 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.448767 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.457966 4809 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.469364 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.484062 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.494530 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.506410 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:32 crc kubenswrapper[4809]: I0226 14:16:32.522238 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:33 crc kubenswrapper[4809]: I0226 14:16:33.256444 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:33 crc kubenswrapper[4809]: I0226 14:16:33.256554 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:33 crc kubenswrapper[4809]: I0226 14:16:33.256444 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:33 crc kubenswrapper[4809]: E0226 14:16:33.256589 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:33 crc kubenswrapper[4809]: E0226 14:16:33.256703 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:33 crc kubenswrapper[4809]: E0226 14:16:33.256852 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:34 crc kubenswrapper[4809]: I0226 14:16:34.256150 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:34 crc kubenswrapper[4809]: E0226 14:16:34.256300 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:35 crc kubenswrapper[4809]: I0226 14:16:35.256598 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:35 crc kubenswrapper[4809]: I0226 14:16:35.256689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:35 crc kubenswrapper[4809]: I0226 14:16:35.256715 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:35 crc kubenswrapper[4809]: E0226 14:16:35.256825 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:35 crc kubenswrapper[4809]: E0226 14:16:35.256921 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:35 crc kubenswrapper[4809]: E0226 14:16:35.257027 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:36 crc kubenswrapper[4809]: I0226 14:16:36.256304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:36 crc kubenswrapper[4809]: E0226 14:16:36.256533 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:36 crc kubenswrapper[4809]: I0226 14:16:36.257409 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:16:36 crc kubenswrapper[4809]: E0226 14:16:36.257570 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:16:37 crc kubenswrapper[4809]: I0226 14:16:37.256538 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:37 crc kubenswrapper[4809]: I0226 14:16:37.256686 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:37 crc kubenswrapper[4809]: E0226 14:16:37.256853 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:37 crc kubenswrapper[4809]: I0226 14:16:37.256887 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:37 crc kubenswrapper[4809]: E0226 14:16:37.257240 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:37 crc kubenswrapper[4809]: E0226 14:16:37.257339 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:37 crc kubenswrapper[4809]: E0226 14:16:37.371425 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:38 crc kubenswrapper[4809]: I0226 14:16:38.256098 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:38 crc kubenswrapper[4809]: E0226 14:16:38.256314 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:39 crc kubenswrapper[4809]: I0226 14:16:39.256666 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:39 crc kubenswrapper[4809]: I0226 14:16:39.256719 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:39 crc kubenswrapper[4809]: I0226 14:16:39.256743 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:39 crc kubenswrapper[4809]: E0226 14:16:39.256835 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:39 crc kubenswrapper[4809]: E0226 14:16:39.256968 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:39 crc kubenswrapper[4809]: E0226 14:16:39.257148 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.255964 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.256316 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.268981 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.269074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.269089 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.269109 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.269121 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:40Z","lastTransitionTime":"2026-02-26T14:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.282344 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:40Z is after 
2025-08-24T17:21:41Z" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.286654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.286692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.286705 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.286717 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.286730 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:40Z","lastTransitionTime":"2026-02-26T14:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.299324 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:40Z is after 
2025-08-24T17:21:41Z" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.302971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.303083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.303100 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.303125 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.303140 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:40Z","lastTransitionTime":"2026-02-26T14:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.317841 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:40Z is after 
2025-08-24T17:21:41Z" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.322282 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.322321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.322333 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.322350 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.322362 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:40Z","lastTransitionTime":"2026-02-26T14:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.336956 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:40Z is after 
2025-08-24T17:21:41Z" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.340681 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.340718 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.340737 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.340750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:40 crc kubenswrapper[4809]: I0226 14:16:40.340760 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:40Z","lastTransitionTime":"2026-02-26T14:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.354911 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"174a06ad-2f49-4e47-8b01-2d4967845ee0\\\",\\\"systemUUID\\\":\\\"f486f530-323a-4284-90aa-e6ee0bb3cb0d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:40Z is after 
2025-08-24T17:21:41Z" Feb 26 14:16:40 crc kubenswrapper[4809]: E0226 14:16:40.355072 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 14:16:41 crc kubenswrapper[4809]: I0226 14:16:41.256177 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:41 crc kubenswrapper[4809]: I0226 14:16:41.256217 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:41 crc kubenswrapper[4809]: E0226 14:16:41.256353 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:41 crc kubenswrapper[4809]: I0226 14:16:41.256186 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:41 crc kubenswrapper[4809]: E0226 14:16:41.256464 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:41 crc kubenswrapper[4809]: E0226 14:16:41.256549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.256412 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:42 crc kubenswrapper[4809]: E0226 14:16:42.256593 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.276488 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b747a3bc738489ebae5cd1f1979b33a7ea5648eeb68979d831034c7eff9d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f1182910b29cb31b4cf78a24cd5b813f29bb87a35b71794dfe5edd829ee315d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.292710 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.316441 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eaaa554-c5bb-455b-ad10-96f71caf4e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:22Z\\\",\\\"message\\\":\\\"7356 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-daemon per-node LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368484 7356 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-daemon template LB for network=default: []services.LB{}\\\\nI0226 14:16:22.368435 7356 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:16:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swptd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qwqmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.330417 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4387d3e87bbbfd59b8fc92612556195161beabc6d37bdad03fb8d9d2eba496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.345788 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q47rn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021874d0-ff73-40e4-97aa-2f72d648e289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb89495cf2b635f420576c3d65846908d0cebcec36a8e62573a6631d083bcdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfcafff8dcc86dd51afacdaf24c01fdeca354d7efa823edbc4d2514c21a14d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2a404d2af8bcd6face2f4b72704ed217231c6cc66b64ac109bc31c826b750d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://842373f84920771bf20189bd3d9c711ccdd7f9ac174da65623cfef67e55d597c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23488d0b9159f6a755652fd38f811e27e976573f30a355a9b6c6a46da7517825\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-26T14:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74258b1a56455c6f0a992b8828711a7c4da19f6645eb85d83659fdab24b0bed1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b0a40115b2691b2475f503216ab439459e11ce456be07f65c39fcb6224d3d81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:15:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqfnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q47rn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.360409 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-55482" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccb95b-da48-49af-a2bf-4d10505c73ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-czznw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-55482\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: E0226 14:16:42.373003 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.373794 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee5dfae-6391-4988-900c-e8abcb031d30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd5bcc077fb39a85edcf85180b5695bc221ec276361d2676749b7848ae95fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5hgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-72xsh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.384576 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d2927e-66d0-4ad1-bf97-264961d9af4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0521beb588e94e0c07f1f9f565715e2d514d5dc9806f3dd0409ac7ebb613985f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e18da4619a1ab68ac813db90535cd02098de06958e37e8c3a79da6f18ee10b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.403609 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"608a8010-f4ce-4cc7-9621-62d5ea5b04eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db3c55142cc8d813d8703faa67dee67352501d3e56cbc3dc3ecdc91bb9dda175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b2d413a0cf3bb37308af245f51a142eb2300f2d1a85cb58e5d15cfd2a41c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aced4823dda8d0a3a5f5d6eb1756543e4e35eba985e2f4139c5884706db00515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca652d4dc3fe757acc7a753a28ed4249d06ee0e24e92a3afd23ce1ce740c290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca6a9417972b3d7b58158d728a52196239b40f7e2008df4bcae5f823a952fe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2463ddddeaf9527b1e5159284d37ceac1bb584b0e8925677e18caaf4448de0a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://290b5b68167b49961e44a32800d932227f03c63ff9fbf9e02953061231f8b803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd53742ebd78536bb028026938669d4cc82bbc7565a849db63f4d9fd91a75f7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.419582 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c7dc88-43f5-4ab4-a5e2-682aa8aefef2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0226 14:14:54.073167 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0226 14:14:54.073378 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 14:14:54.074239 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4064866295/tls.crt::/tmp/serving-cert-4064866295/tls.key\\\\\\\"\\\\nI0226 14:14:54.496691 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0226 14:14:54.499042 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0226 14:14:54.499066 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0226 14:14:54.499087 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0226 14:14:54.499091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0226 14:14:54.503934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0226 14:14:54.503992 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504001 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0226 14:14:54.504035 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0226 14:14:54.504044 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0226 14:14:54.503940 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0226 14:14:54.504050 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0226 14:14:54.504060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0226 14:14:54.506631 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:14:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.431194 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a971899-4537-424e-b53a-999fdc56e22b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a96cd1a8587909f594a0747749c8a7ef85a65bfa19f72954200cbca10add844d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad3b339c9e80d63fbe96a9d5e003179cf5ecdec327ba8c8bdd3caff148ef2d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c774b862da2c14bc595cc160db0e80b9df0ba52dd9e664bc5de90efd2dd897cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8d6a4cb2a71e6aca849027ef10b7ca3781836b61aaa53d9c06be93b29b4b572\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T14:13:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.442642 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.452834 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63dc479b16ed78006a29877943fcc8a320cd31e9b2470fe25c6567ab6f366e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.462773 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hc768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91705d-1fab-4240-8e70-b3e01e220a8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4fdd4b9f54aab0a0ba1d26955c0fd2292d43e0b210b12ab61d0f9457b49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kg4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hc768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.474289 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bbb0a2-6753-4e07-be67-8e5c4c570e1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f2a51adc717809ee43a546e4584a4e3ed9db65f58bfbd4c57c4872e247b1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fc1f90ebe74591c2f153e8b47b2bbb12f9a016669c4525cfd9057bf8c266418\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T14:14:14Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 14:13:44.323458 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 14:13:44.326137 1 observer_polling.go:159] Starting file observer\\\\nI0226 14:13:44.358651 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 14:13:44.362599 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0226 14:14:14.709999 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:14:13Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:14:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d614262d6cde1b044f1b13572f8b9fe4da59cb76eb7d1d14d64e75467771db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3b3e3df5982523ae7d283453c88b2160730468f4118512c01f9b5d1bb42d2b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:13:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.486982 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.502224 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ccvqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bca1e32-8331-4d7d-acf3-7ee31374c8bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T14:16:14Z\\\",\\\"message\\\":\\\"2026-02-26T14:15:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d\\\\n2026-02-26T14:15:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1d0150ed-b8cd-4312-a8e5-8a384f8cf71d to /host/opt/cni/bin/\\\\n2026-02-26T14:15:29Z [verbose] multus-daemon started\\\\n2026-02-26T14:15:29Z [verbose] Readiness Indicator file check\\\\n2026-02-26T14:16:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T14:15:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjr6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ccvqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.512177 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pkjv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628aecc0-f33d-45bc-a351-897a05a70dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a9237e1a47b02fbf7d48eab4c3eb4b14e0d316682da0756847829c7269adf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m6lbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pkjv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 14:16:42 crc kubenswrapper[4809]: I0226 14:16:42.523494 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ef5e93-b8e6-4ec8-b07f-841b17f321af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T14:15:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1d84ae93828d8feae7c87dd747dbc2bc83e3745c7963815461d467941e2e0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3423882b05b83d4b75d193f3f6827e9d374d025a38f5489698868466486cc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T14:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9ftq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T14:15:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vrglb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T14:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 
14:16:43 crc kubenswrapper[4809]: I0226 14:16:43.256567 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:43 crc kubenswrapper[4809]: I0226 14:16:43.256764 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:43 crc kubenswrapper[4809]: E0226 14:16:43.256806 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:43 crc kubenswrapper[4809]: I0226 14:16:43.256599 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:43 crc kubenswrapper[4809]: E0226 14:16:43.257063 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:43 crc kubenswrapper[4809]: E0226 14:16:43.257209 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:44 crc kubenswrapper[4809]: I0226 14:16:44.255846 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:44 crc kubenswrapper[4809]: E0226 14:16:44.256057 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:45 crc kubenswrapper[4809]: I0226 14:16:45.256262 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:45 crc kubenswrapper[4809]: I0226 14:16:45.256262 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:45 crc kubenswrapper[4809]: E0226 14:16:45.256485 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:45 crc kubenswrapper[4809]: E0226 14:16:45.256571 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:45 crc kubenswrapper[4809]: I0226 14:16:45.256298 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:45 crc kubenswrapper[4809]: E0226 14:16:45.256652 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:46 crc kubenswrapper[4809]: I0226 14:16:46.255906 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:46 crc kubenswrapper[4809]: E0226 14:16:46.256045 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:47 crc kubenswrapper[4809]: I0226 14:16:47.256310 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:47 crc kubenswrapper[4809]: E0226 14:16:47.256442 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:47 crc kubenswrapper[4809]: I0226 14:16:47.256333 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:47 crc kubenswrapper[4809]: I0226 14:16:47.256313 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:47 crc kubenswrapper[4809]: E0226 14:16:47.256516 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:47 crc kubenswrapper[4809]: E0226 14:16:47.256679 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:47 crc kubenswrapper[4809]: E0226 14:16:47.374422 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:48 crc kubenswrapper[4809]: I0226 14:16:48.256344 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:48 crc kubenswrapper[4809]: E0226 14:16:48.256464 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:49 crc kubenswrapper[4809]: I0226 14:16:49.256417 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:49 crc kubenswrapper[4809]: E0226 14:16:49.256570 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:49 crc kubenswrapper[4809]: I0226 14:16:49.256781 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:49 crc kubenswrapper[4809]: E0226 14:16:49.256829 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:49 crc kubenswrapper[4809]: I0226 14:16:49.256971 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:49 crc kubenswrapper[4809]: E0226 14:16:49.257162 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:49 crc kubenswrapper[4809]: I0226 14:16:49.259538 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:16:49 crc kubenswrapper[4809]: E0226 14:16:49.259815 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qwqmq_openshift-ovn-kubernetes(4eaaa554-c5bb-455b-ad10-96f71caf4e26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.256357 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:50 crc kubenswrapper[4809]: E0226 14:16:50.256542 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.689971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.690026 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.690037 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.690052 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.690063 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T14:16:50Z","lastTransitionTime":"2026-02-26T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.740950 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf"] Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.741485 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.743947 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.745058 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.745071 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.745295 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.774051 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-q47rn" podStartSLOduration=130.774003018 podStartE2EDuration="2m10.774003018s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.773964257 +0000 UTC m=+189.247284790" watchObservedRunningTime="2026-02-26 14:16:50.774003018 +0000 UTC m=+189.247323541" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.800986 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=67.800933038 podStartE2EDuration="1m7.800933038s" podCreationTimestamp="2026-02-26 14:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.800450854 +0000 UTC m=+189.273771377" watchObservedRunningTime="2026-02-26 14:16:50.800933038 +0000 UTC m=+189.274253581" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.843648 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hc768" podStartSLOduration=130.843630574 podStartE2EDuration="2m10.843630574s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.843387497 +0000 UTC m=+189.316708030" watchObservedRunningTime="2026-02-26 14:16:50.843630574 +0000 UTC m=+189.316951097" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.855930 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podStartSLOduration=130.85590555 podStartE2EDuration="2m10.85590555s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.855838998 +0000 UTC m=+189.329159521" watchObservedRunningTime="2026-02-26 14:16:50.85590555 +0000 UTC m=+189.329226073" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.869173 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=54.869150183 podStartE2EDuration="54.869150183s" podCreationTimestamp="2026-02-26 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.869092201 +0000 UTC m=+189.342412744" watchObservedRunningTime="2026-02-26 14:16:50.869150183 +0000 UTC m=+189.342470706" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.895072 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=93.895050233 podStartE2EDuration="1m33.895050233s" podCreationTimestamp="2026-02-26 14:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.894429375 +0000 UTC m=+189.367749898" watchObservedRunningTime="2026-02-26 14:16:50.895050233 +0000 UTC m=+189.368370756" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.902583 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.902631 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.902654 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.902680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.902712 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.924697 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.924679681 podStartE2EDuration="1m33.924679681s" podCreationTimestamp="2026-02-26 14:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.911985674 +0000 UTC m=+189.385306197" 
watchObservedRunningTime="2026-02-26 14:16:50.924679681 +0000 UTC m=+189.398000204" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.934829 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pkjv8" podStartSLOduration=130.934808105 podStartE2EDuration="2m10.934808105s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.934376882 +0000 UTC m=+189.407697435" watchObservedRunningTime="2026-02-26 14:16:50.934808105 +0000 UTC m=+189.408128628" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.961825 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vrglb" podStartSLOduration=129.961801496 podStartE2EDuration="2m9.961801496s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.946261646 +0000 UTC m=+189.419582159" watchObservedRunningTime="2026-02-26 14:16:50.961801496 +0000 UTC m=+189.435122019" Feb 26 14:16:50 crc kubenswrapper[4809]: I0226 14:16:50.962550 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=59.962541368 podStartE2EDuration="59.962541368s" podCreationTimestamp="2026-02-26 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.961904269 +0000 UTC m=+189.435224792" watchObservedRunningTime="2026-02-26 14:16:50.962541368 +0000 UTC m=+189.435861891" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003555 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003628 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003678 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003797 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.003848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.004707 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.007680 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ccvqm" podStartSLOduration=131.007658054 podStartE2EDuration="2m11.007658054s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:50.990704323 +0000 UTC m=+189.464024856" watchObservedRunningTime="2026-02-26 14:16:51.007658054 +0000 UTC m=+189.480978577" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.011358 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.023471 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63aa4966-b539-47bd-8c8d-8eb50ba1d8de-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t9twf\" (UID: \"63aa4966-b539-47bd-8c8d-8eb50ba1d8de\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.053421 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" Feb 26 14:16:51 crc kubenswrapper[4809]: W0226 14:16:51.068784 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63aa4966_b539_47bd_8c8d_8eb50ba1d8de.slice/crio-44cde47d6496e06a1f6c0d758a5db5b57c1a9652e28db7e9e22c3828ab2d1e4b WatchSource:0}: Error finding container 44cde47d6496e06a1f6c0d758a5db5b57c1a9652e28db7e9e22c3828ab2d1e4b: Status 404 returned error can't find the container with id 44cde47d6496e06a1f6c0d758a5db5b57c1a9652e28db7e9e22c3828ab2d1e4b Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.107710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" event={"ID":"63aa4966-b539-47bd-8c8d-8eb50ba1d8de","Type":"ContainerStarted","Data":"44cde47d6496e06a1f6c0d758a5db5b57c1a9652e28db7e9e22c3828ab2d1e4b"} Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.256165 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.256239 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:51 crc kubenswrapper[4809]: E0226 14:16:51.256441 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.256264 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:51 crc kubenswrapper[4809]: E0226 14:16:51.256526 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:51 crc kubenswrapper[4809]: E0226 14:16:51.256559 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.315710 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 14:16:51 crc kubenswrapper[4809]: I0226 14:16:51.323086 4809 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 14:16:52 crc kubenswrapper[4809]: I0226 14:16:52.111930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" event={"ID":"63aa4966-b539-47bd-8c8d-8eb50ba1d8de","Type":"ContainerStarted","Data":"885f4c338e2589f1055f3d7513aa511b0784fc45ff269b3293d7e9670b9c1b3b"} Feb 26 14:16:52 crc kubenswrapper[4809]: I0226 14:16:52.256322 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:52 crc kubenswrapper[4809]: E0226 14:16:52.258322 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:52 crc kubenswrapper[4809]: E0226 14:16:52.375248 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:53 crc kubenswrapper[4809]: I0226 14:16:53.256465 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:53 crc kubenswrapper[4809]: I0226 14:16:53.256816 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:53 crc kubenswrapper[4809]: I0226 14:16:53.256892 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:53 crc kubenswrapper[4809]: E0226 14:16:53.256934 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:53 crc kubenswrapper[4809]: E0226 14:16:53.257206 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:53 crc kubenswrapper[4809]: E0226 14:16:53.257317 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:54 crc kubenswrapper[4809]: I0226 14:16:54.256616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:54 crc kubenswrapper[4809]: E0226 14:16:54.256793 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:55 crc kubenswrapper[4809]: I0226 14:16:55.255858 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:55 crc kubenswrapper[4809]: E0226 14:16:55.256287 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:55 crc kubenswrapper[4809]: I0226 14:16:55.255965 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:55 crc kubenswrapper[4809]: I0226 14:16:55.255907 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:55 crc kubenswrapper[4809]: E0226 14:16:55.256775 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:55 crc kubenswrapper[4809]: E0226 14:16:55.256695 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:56 crc kubenswrapper[4809]: I0226 14:16:56.255974 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:56 crc kubenswrapper[4809]: E0226 14:16:56.256171 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:57 crc kubenswrapper[4809]: I0226 14:16:57.256449 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:57 crc kubenswrapper[4809]: I0226 14:16:57.256472 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:57 crc kubenswrapper[4809]: I0226 14:16:57.256477 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:57 crc kubenswrapper[4809]: E0226 14:16:57.256842 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:16:57 crc kubenswrapper[4809]: E0226 14:16:57.257092 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:57 crc kubenswrapper[4809]: E0226 14:16:57.256967 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:57 crc kubenswrapper[4809]: E0226 14:16:57.376510 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:16:58 crc kubenswrapper[4809]: I0226 14:16:58.256528 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:16:58 crc kubenswrapper[4809]: E0226 14:16:58.257190 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:16:59 crc kubenswrapper[4809]: I0226 14:16:59.256501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:16:59 crc kubenswrapper[4809]: I0226 14:16:59.256582 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:16:59 crc kubenswrapper[4809]: I0226 14:16:59.256627 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:16:59 crc kubenswrapper[4809]: E0226 14:16:59.256888 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:16:59 crc kubenswrapper[4809]: E0226 14:16:59.256976 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:16:59 crc kubenswrapper[4809]: E0226 14:16:59.257102 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:00 crc kubenswrapper[4809]: I0226 14:17:00.255903 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:00 crc kubenswrapper[4809]: E0226 14:17:00.256064 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.148664 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/1.log" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.149224 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/0.log" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.149278 4809 generic.go:334] "Generic (PLEG): container finished" podID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" containerID="e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639" exitCode=1 Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.149327 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerDied","Data":"e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639"} Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.149381 4809 scope.go:117] "RemoveContainer" containerID="942c2c3d94b46914cc52160781c6985c7b1f1b5ac164730a3507c49c3d8951c6" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.150043 4809 scope.go:117] "RemoveContainer" containerID="e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639" Feb 26 14:17:01 crc kubenswrapper[4809]: E0226 14:17:01.150273 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ccvqm_openshift-multus(9bca1e32-8331-4d7d-acf3-7ee31374c8bd)\"" pod="openshift-multus/multus-ccvqm" podUID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.170968 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t9twf" podStartSLOduration=141.170933951 podStartE2EDuration="2m21.170933951s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:16:52.125357673 +0000 UTC m=+190.598678206" watchObservedRunningTime="2026-02-26 14:17:01.170933951 +0000 UTC m=+199.644254484" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.256301 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.256326 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:01 crc kubenswrapper[4809]: I0226 14:17:01.256397 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:01 crc kubenswrapper[4809]: E0226 14:17:01.256443 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:01 crc kubenswrapper[4809]: E0226 14:17:01.256557 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:01 crc kubenswrapper[4809]: E0226 14:17:01.256650 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:02 crc kubenswrapper[4809]: I0226 14:17:02.154037 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/1.log" Feb 26 14:17:02 crc kubenswrapper[4809]: I0226 14:17:02.256581 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:02 crc kubenswrapper[4809]: E0226 14:17:02.257801 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:02 crc kubenswrapper[4809]: E0226 14:17:02.377403 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:17:03 crc kubenswrapper[4809]: I0226 14:17:03.255684 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:03 crc kubenswrapper[4809]: I0226 14:17:03.255754 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:03 crc kubenswrapper[4809]: E0226 14:17:03.255855 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:03 crc kubenswrapper[4809]: I0226 14:17:03.255767 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:03 crc kubenswrapper[4809]: E0226 14:17:03.255940 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:03 crc kubenswrapper[4809]: E0226 14:17:03.256048 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:04 crc kubenswrapper[4809]: I0226 14:17:04.255799 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:04 crc kubenswrapper[4809]: E0226 14:17:04.256213 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:04 crc kubenswrapper[4809]: I0226 14:17:04.256442 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.034816 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-55482"] Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.035287 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:05 crc kubenswrapper[4809]: E0226 14:17:05.035392 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.171268 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.177636 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerStarted","Data":"4136f2637e699c68ac367d76cfbcc0365cba0606b4c0dd697df232fe0e5c0b77"} Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.178693 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.225508 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podStartSLOduration=144.225485338 podStartE2EDuration="2m24.225485338s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:05.225177499 +0000 UTC m=+203.698498032" watchObservedRunningTime="2026-02-26 14:17:05.225485338 +0000 UTC m=+203.698805861" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.255587 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:05 crc kubenswrapper[4809]: I0226 14:17:05.255587 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:05 crc kubenswrapper[4809]: E0226 14:17:05.255790 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:05 crc kubenswrapper[4809]: E0226 14:17:05.255708 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:06 crc kubenswrapper[4809]: I0226 14:17:06.256427 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:06 crc kubenswrapper[4809]: E0226 14:17:06.256560 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:06 crc kubenswrapper[4809]: I0226 14:17:06.256687 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:06 crc kubenswrapper[4809]: E0226 14:17:06.256899 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:07 crc kubenswrapper[4809]: I0226 14:17:07.255793 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:07 crc kubenswrapper[4809]: I0226 14:17:07.255864 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:07 crc kubenswrapper[4809]: E0226 14:17:07.255950 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:07 crc kubenswrapper[4809]: E0226 14:17:07.256205 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:07 crc kubenswrapper[4809]: E0226 14:17:07.378439 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:17:08 crc kubenswrapper[4809]: I0226 14:17:08.255893 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:08 crc kubenswrapper[4809]: I0226 14:17:08.255966 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:08 crc kubenswrapper[4809]: E0226 14:17:08.256069 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:08 crc kubenswrapper[4809]: E0226 14:17:08.256109 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:09 crc kubenswrapper[4809]: I0226 14:17:09.256412 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:09 crc kubenswrapper[4809]: I0226 14:17:09.256454 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:09 crc kubenswrapper[4809]: E0226 14:17:09.256566 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:09 crc kubenswrapper[4809]: E0226 14:17:09.256684 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:10 crc kubenswrapper[4809]: I0226 14:17:10.255940 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:10 crc kubenswrapper[4809]: E0226 14:17:10.256104 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:10 crc kubenswrapper[4809]: I0226 14:17:10.256186 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:10 crc kubenswrapper[4809]: E0226 14:17:10.256434 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:11 crc kubenswrapper[4809]: I0226 14:17:11.255925 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:11 crc kubenswrapper[4809]: I0226 14:17:11.255946 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:11 crc kubenswrapper[4809]: E0226 14:17:11.256091 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:11 crc kubenswrapper[4809]: E0226 14:17:11.256258 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:12 crc kubenswrapper[4809]: I0226 14:17:12.154536 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:17:12 crc kubenswrapper[4809]: I0226 14:17:12.256133 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:12 crc kubenswrapper[4809]: I0226 14:17:12.256186 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:12 crc kubenswrapper[4809]: E0226 14:17:12.257521 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:12 crc kubenswrapper[4809]: E0226 14:17:12.257660 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:12 crc kubenswrapper[4809]: E0226 14:17:12.379068 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:17:13 crc kubenswrapper[4809]: I0226 14:17:13.256371 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:13 crc kubenswrapper[4809]: E0226 14:17:13.256754 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:13 crc kubenswrapper[4809]: I0226 14:17:13.256487 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:13 crc kubenswrapper[4809]: E0226 14:17:13.257076 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:14 crc kubenswrapper[4809]: I0226 14:17:14.256094 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:14 crc kubenswrapper[4809]: E0226 14:17:14.256501 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:14 crc kubenswrapper[4809]: I0226 14:17:14.256138 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:14 crc kubenswrapper[4809]: E0226 14:17:14.256583 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:15 crc kubenswrapper[4809]: I0226 14:17:15.255945 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:15 crc kubenswrapper[4809]: E0226 14:17:15.256307 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:15 crc kubenswrapper[4809]: I0226 14:17:15.255952 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:15 crc kubenswrapper[4809]: E0226 14:17:15.256546 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:16 crc kubenswrapper[4809]: I0226 14:17:16.256207 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:16 crc kubenswrapper[4809]: E0226 14:17:16.256327 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:16 crc kubenswrapper[4809]: I0226 14:17:16.256529 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:16 crc kubenswrapper[4809]: I0226 14:17:16.256621 4809 scope.go:117] "RemoveContainer" containerID="e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639" Feb 26 14:17:16 crc kubenswrapper[4809]: E0226 14:17:16.256737 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:17 crc kubenswrapper[4809]: I0226 14:17:17.215943 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/1.log" Feb 26 14:17:17 crc kubenswrapper[4809]: I0226 14:17:17.215993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerStarted","Data":"8e8d94bb545a2efa853b4d03334e9577ab1599686436650376bb4f50567df458"} Feb 26 14:17:17 crc kubenswrapper[4809]: I0226 14:17:17.255690 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:17 crc kubenswrapper[4809]: E0226 14:17:17.255822 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:17 crc kubenswrapper[4809]: I0226 14:17:17.256064 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:17 crc kubenswrapper[4809]: E0226 14:17:17.256139 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:17 crc kubenswrapper[4809]: E0226 14:17:17.380670 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:17:18 crc kubenswrapper[4809]: I0226 14:17:18.255937 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:18 crc kubenswrapper[4809]: I0226 14:17:18.255991 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:18 crc kubenswrapper[4809]: E0226 14:17:18.256189 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:18 crc kubenswrapper[4809]: E0226 14:17:18.256320 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.183348 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.183465 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.183533 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:19:21.183504997 +0000 UTC m=+339.656825550 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.183596 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.183611 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.183668 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:19:21.183653192 +0000 UTC m=+339.656973735 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.183696 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.183738 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 14:19:21.183726054 +0000 UTC m=+339.657046597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.256476 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.256586 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.256656 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.256787 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.283959 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:19 crc kubenswrapper[4809]: I0226 14:17:19.284002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284171 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284187 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284198 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284237 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284290 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284306 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284250 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 14:19:21.284233696 +0000 UTC m=+339.757554219 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:17:19 crc kubenswrapper[4809]: E0226 14:17:19.284394 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 14:19:21.28437428 +0000 UTC m=+339.757694983 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 14:17:20 crc kubenswrapper[4809]: I0226 14:17:20.256410 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:20 crc kubenswrapper[4809]: I0226 14:17:20.256421 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:20 crc kubenswrapper[4809]: E0226 14:17:20.256728 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:20 crc kubenswrapper[4809]: E0226 14:17:20.256879 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:21 crc kubenswrapper[4809]: I0226 14:17:21.255977 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:21 crc kubenswrapper[4809]: E0226 14:17:21.256243 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 14:17:21 crc kubenswrapper[4809]: I0226 14:17:21.256586 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:21 crc kubenswrapper[4809]: E0226 14:17:21.256759 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 14:17:22 crc kubenswrapper[4809]: I0226 14:17:22.256214 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:22 crc kubenswrapper[4809]: I0226 14:17:22.256259 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:22 crc kubenswrapper[4809]: E0226 14:17:22.257261 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 14:17:22 crc kubenswrapper[4809]: E0226 14:17:22.257592 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-55482" podUID="a8ccb95b-da48-49af-a2bf-4d10505c73ae" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.255891 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.255923 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.258827 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.259133 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.259165 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 14:17:23 crc kubenswrapper[4809]: I0226 14:17:23.260069 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 14:17:24 crc kubenswrapper[4809]: I0226 14:17:24.256118 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:17:24 crc kubenswrapper[4809]: I0226 14:17:24.256404 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:24 crc kubenswrapper[4809]: I0226 14:17:24.259171 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 14:17:24 crc kubenswrapper[4809]: I0226 14:17:24.261507 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.541819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.584266 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5c5f4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.585152 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.585515 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4vxzc"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.586635 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.586653 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.587338 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.587812 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.588341 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.591395 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.591859 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.593222 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2x7w"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.593587 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.594933 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.595221 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596045 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596063 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596351 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596651 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596752 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596887 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.596958 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.604066 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.604398 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.604612 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.605095 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.605775 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.606074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.608127 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.608328 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.608497 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.611866 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 
14:17:31.612129 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.623232 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.623779 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.623933 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.624649 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.624834 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.625006 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.625083 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.625278 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.625326 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.625648 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626052 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626101 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pdzjj"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626202 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626245 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626415 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626501 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 
14:17:31.626551 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626584 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626678 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626725 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626852 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626887 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626970 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627006 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627080 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627102 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rs49n"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627151 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626240 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-config\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627336 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-node-pullsecrets\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627360 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627008 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627383 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfckt\" (UniqueName: \"kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627408 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ad35a4-2e13-46d4-9404-690ffddd919e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627430 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-client\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627451 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-image-import-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627448 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627714 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627906 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qg4\" (UniqueName: \"kubernetes.io/projected/63ad35a4-2e13-46d4-9404-690ffddd919e-kube-api-access-r8qg4\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.627965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znd6w\" (UniqueName: \"kubernetes.io/projected/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-kube-api-access-znd6w\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628007 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-serving-cert\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zjx\" (UniqueName: \"kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628111 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-images\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628147 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit-dir\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628213 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628233 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628250 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-config\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628291 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.628180 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.629271 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.629866 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a664d458-7627-417c-ad03-5665fe60d20a-serving-cert\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.629945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.629973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-encryption-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630052 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630090 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630108 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630129 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630146 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8sm6\" (UniqueName: \"kubernetes.io/projected/a664d458-7627-417c-ad03-5665fe60d20a-kube-api-access-j8sm6\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.630159 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.626981 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.631392 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5c5f4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.633218 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.633366 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.633711 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.633907 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.634006 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.634493 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.634618 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.634736 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.634993 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.635062 4809 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.635027 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.635208 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.637508 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.641976 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.642078 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4vxzc"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.645425 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647170 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647384 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647545 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647558 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jlgsb"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647840 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.647853 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.648147 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.648176 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.648339 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.648498 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.648811 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.650499 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.651421 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.652331 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.652511 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.652554 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.652645 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.652710 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mxjxl"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.653290 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.654174 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.654499 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.659349 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.659413 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.659656 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.659803 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660040 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660146 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660187 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660336 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660430 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660517 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.660708 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661275 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661367 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661518 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661820 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.661948 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.662160 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 
14:17:31.662276 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.665093 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.665160 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pdzjj"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.666960 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.667637 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.677681 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.681992 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.682187 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.709342 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.709743 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.709930 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.710084 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.710674 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.711185 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.711421 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.712417 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.712610 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.712919 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.712997 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.713053 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.713553 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.714897 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.715689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.716283 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.717739 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.718229 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rs49n"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.718312 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.719121 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.720913 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xfdk4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.721464 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.721659 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.721807 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.723084 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4drch"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.723777 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.724580 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.725516 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.725712 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.726489 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.727106 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.728112 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.728357 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.728451 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dwhvv"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.729629 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.730833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.730883 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd00ba25-7848-4991-ba14-669a11a0d349-config\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.730913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-metrics-tls\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.730940 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8sm6\" (UniqueName: \"kubernetes.io/projected/a664d458-7627-417c-ad03-5665fe60d20a-kube-api-access-j8sm6\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc 
kubenswrapper[4809]: I0226 14:17:31.730969 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.730996 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-config-volume\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731036 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731060 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-service-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731081 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f510a8fd-d7e5-4434-8505-884005bd90ee-metrics-tls\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731103 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd00ba25-7848-4991-ba14-669a11a0d349-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731125 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsvn\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-kube-api-access-rvsvn\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731163 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731210 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-serving-cert\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731237 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731261 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-client\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731333 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-config\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731359 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731385 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731448 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-config\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731506 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731551 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-node-pullsecrets\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731575 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731599 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfckt\" (UniqueName: \"kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731622 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-dir\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731647 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.731670 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20693fe0-6e35-4ecd-ace1-4ef044206c00-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 
14:17:31.731694 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd00ba25-7848-4991-ba14-669a11a0d349-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.732006 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.732896 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.733184 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-serving-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.734844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.735703 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.736437 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.736593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.736847 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.737206 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-node-pullsecrets\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.737802 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738025 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738127 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7jw\" (UniqueName: \"kubernetes.io/projected/20693fe0-6e35-4ecd-ace1-4ef044206c00-kube-api-access-vp7jw\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738166 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.735719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738203 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ad35a4-2e13-46d4-9404-690ffddd919e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-client\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738267 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-image-import-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738304 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738339 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qg4\" (UniqueName: \"kubernetes.io/projected/63ad35a4-2e13-46d4-9404-690ffddd919e-kube-api-access-r8qg4\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738371 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-policies\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.738444 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znd6w\" (UniqueName: \"kubernetes.io/projected/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-kube-api-access-znd6w\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.739522 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-image-import-ca\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.739585 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85de89d6-b550-49ac-b2e6-ec83ae54cac8-machine-approver-tls\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740327 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740350 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740382 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-serving-cert\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740430 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zjx\" (UniqueName: 
\"kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740453 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/04cffd1e-6ff5-4cd3-b013-d92034639a1e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740818 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-config\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.740927 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535256-qfv5b"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.757539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f510a8fd-d7e5-4434-8505-884005bd90ee-trusted-ca\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.765982 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-etcd-client\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.766556 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-images\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.766646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bdf\" (UniqueName: \"kubernetes.io/projected/b9d62ac5-d483-4086-be8e-e1b7a784701c-kube-api-access-89bdf\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.766674 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qhdg\" (UniqueName: \"kubernetes.io/projected/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-kube-api-access-9qhdg\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.766732 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-l65jj"] Feb 26 14:17:31 crc 
kubenswrapper[4809]: I0226 14:17:31.766865 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767479 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767513 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-images\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767567 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767656 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit-dir\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767731 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20693fe0-6e35-4ecd-ace1-4ef044206c00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767757 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-audit-dir\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767843 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.767886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-auth-proxy-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc 
kubenswrapper[4809]: I0226 14:17:31.768713 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.768776 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-config\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.768799 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.769806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770236 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770535 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl755\" (UniqueName: \"kubernetes.io/projected/02f12e35-0b9a-4af4-ac63-2602bebcb9b0-kube-api-access-zl755\") pod \"downloads-7954f5f757-jlgsb\" (UID: \"02f12e35-0b9a-4af4-ac63-2602bebcb9b0\") " pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjh4\" (UniqueName: \"kubernetes.io/projected/85de89d6-b550-49ac-b2e6-ec83ae54cac8-kube-api-access-hhjh4\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770694 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnqn\" (UniqueName: \"kubernetes.io/projected/9bb17c48-4174-42b1-91f5-a3debbbc23c6-kube-api-access-tvnqn\") pod \"migrator-59844c95c7-zj2zg\" (UID: 
\"9bb17c48-4174-42b1-91f5-a3debbbc23c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770729 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhf4\" (UniqueName: \"kubernetes.io/projected/76527e96-13d7-4cc0-b245-dde49efb2786-kube-api-access-slhf4\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.770787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mbk\" (UniqueName: \"kubernetes.io/projected/04cffd1e-6ff5-4cd3-b013-d92034639a1e-kube-api-access-v8mbk\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771441 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a664d458-7627-417c-ad03-5665fe60d20a-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a664d458-7627-417c-ad03-5665fe60d20a-serving-cert\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771581 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-serving-cert\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771690 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-client\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.771721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " 
pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.772275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.772358 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-encryption-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.772875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.774733 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.778530 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.780077 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.780450 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.780523 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4mt\" (UniqueName: \"kubernetes.io/projected/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-kube-api-access-8q4mt\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781177 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a664d458-7627-417c-ad03-5665fe60d20a-serving-cert\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" 
Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781237 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-encryption-config\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gn5t\" (UniqueName: \"kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781435 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.781789 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.782791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.782858 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.787466 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-serving-cert\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.788927 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ad35a4-2e13-46d4-9404-690ffddd919e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.788956 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-encryption-config\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.788946 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ad35a4-2e13-46d4-9404-690ffddd919e-config\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.789425 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.790590 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cv4q"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.791824 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.793239 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.793393 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.793824 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.800439 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.804241 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.804646 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.804879 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.805332 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vm52c"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.805723 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.806520 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.808001 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.809149 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.809218 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.812764 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.813942 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mxjxl"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.815918 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.816475 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.817889 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4drch"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.818960 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2x7w"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.821105 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.821734 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.823422 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.826815 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.828193 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.830210 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.831449 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.833302 4809 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jlgsb"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.834093 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.835189 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.836373 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.837415 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcg4s"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.838674 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-qfv5b"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.838879 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.839689 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nq6xq"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.840714 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.840726 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.841839 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-l65jj"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.842667 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.843719 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cv4q"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.844740 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vm52c"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.845678 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.846652 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.846843 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.847686 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.848612 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xfdk4"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.849593 4809 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nq6xq"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.851733 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.853379 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcg4s"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.854690 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.855766 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kjlkr"] Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.857046 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.867333 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882381 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl755\" (UniqueName: \"kubernetes.io/projected/02f12e35-0b9a-4af4-ac63-2602bebcb9b0-kube-api-access-zl755\") pod \"downloads-7954f5f757-jlgsb\" (UID: \"02f12e35-0b9a-4af4-ac63-2602bebcb9b0\") " pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjh4\" (UniqueName: \"kubernetes.io/projected/85de89d6-b550-49ac-b2e6-ec83ae54cac8-kube-api-access-hhjh4\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnqn\" (UniqueName: \"kubernetes.io/projected/9bb17c48-4174-42b1-91f5-a3debbbc23c6-kube-api-access-tvnqn\") pod \"migrator-59844c95c7-zj2zg\" (UID: \"9bb17c48-4174-42b1-91f5-a3debbbc23c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882490 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhf4\" (UniqueName: \"kubernetes.io/projected/76527e96-13d7-4cc0-b245-dde49efb2786-kube-api-access-slhf4\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882513 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-serving-cert\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mbk\" (UniqueName: \"kubernetes.io/projected/04cffd1e-6ff5-4cd3-b013-d92034639a1e-kube-api-access-v8mbk\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-client\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882606 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882637 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4mt\" (UniqueName: \"kubernetes.io/projected/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-kube-api-access-8q4mt\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882653 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-encryption-config\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882683 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gn5t\" (UniqueName: \"kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 
14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd00ba25-7848-4991-ba14-669a11a0d349-config\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882738 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-metrics-tls\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882778 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-service-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f510a8fd-d7e5-4434-8505-884005bd90ee-metrics-tls\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882807 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-config-volume\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882821 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd00ba25-7848-4991-ba14-669a11a0d349-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882838 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsvn\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-kube-api-access-rvsvn\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-serving-cert\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882878 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882910 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-client\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882929 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-config\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882947 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882966 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.882997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-dir\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883028 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883046 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20693fe0-6e35-4ecd-ace1-4ef044206c00-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883063 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd00ba25-7848-4991-ba14-669a11a0d349-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883085 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7jw\" (UniqueName: \"kubernetes.io/projected/20693fe0-6e35-4ecd-ace1-4ef044206c00-kube-api-access-vp7jw\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883111 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-policies\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883149 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85de89d6-b550-49ac-b2e6-ec83ae54cac8-machine-approver-tls\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883191 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/04cffd1e-6ff5-4cd3-b013-d92034639a1e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883209 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f510a8fd-d7e5-4434-8505-884005bd90ee-trusted-ca\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883226 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89bdf\" (UniqueName: \"kubernetes.io/projected/b9d62ac5-d483-4086-be8e-e1b7a784701c-kube-api-access-89bdf\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qhdg\" (UniqueName: \"kubernetes.io/projected/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-kube-api-access-9qhdg\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883259 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20693fe0-6e35-4ecd-ace1-4ef044206c00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-auth-proxy-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883292 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.883496 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.884054 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-config-volume\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.884210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.884581 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.884623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-dir\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.884646 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.885378 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.885457 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b9d62ac5-d483-4086-be8e-e1b7a784701c-audit-policies\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887319 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20693fe0-6e35-4ecd-ace1-4ef044206c00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887599 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887641 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-etcd-client\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887768 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85de89d6-b550-49ac-b2e6-ec83ae54cac8-auth-proxy-config\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887775 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-metrics-tls\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.887937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.888430 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.888874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/04cffd1e-6ff5-4cd3-b013-d92034639a1e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.889153 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.890185 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/85de89d6-b550-49ac-b2e6-ec83ae54cac8-machine-approver-tls\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.890277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-encryption-config\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.890532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20693fe0-6e35-4ecd-ace1-4ef044206c00-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.891263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.892194 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d62ac5-d483-4086-be8e-e1b7a784701c-serving-cert\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.907434 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.926753 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.948356 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.967060 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.987700 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 14:17:31 crc kubenswrapper[4809]: I0226 14:17:31.999355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd00ba25-7848-4991-ba14-669a11a0d349-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.007673 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.011114 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd00ba25-7848-4991-ba14-669a11a0d349-config\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.027367 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.047114 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.066827 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.087154 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.107151 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.127703 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.147624 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.167524 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.186695 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.208120 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.227215 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.247577 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.267814 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.287231 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.294585 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-config\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.307731 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.314325 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.328097 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.338792 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-client\") pod 
\"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.347737 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.356459 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76527e96-13d7-4cc0-b245-dde49efb2786-etcd-service-ca\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.367155 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.378436 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76527e96-13d7-4cc0-b245-dde49efb2786-serving-cert\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.387161 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.407273 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.427352 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.448529 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.467966 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.479388 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f510a8fd-d7e5-4434-8505-884005bd90ee-metrics-tls\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.515067 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.517619 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f510a8fd-d7e5-4434-8505-884005bd90ee-trusted-ca\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.527452 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.548329 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.567182 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 14:17:32 crc kubenswrapper[4809]: I0226 14:17:32.825442 4809 request.go:700] Waited for 1.087283516s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.014240 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.014769 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.014999 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.021386 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.021648 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.021878 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.022232 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.022494 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.022699 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.022878 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.023223 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.024074 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.024119 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.024148 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.024127 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 
14:17:33.038434 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.054258 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.058952 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znd6w\" (UniqueName: \"kubernetes.io/projected/3d875f76-8d31-46f5-9fcc-20d2868e7c2f-kube-api-access-znd6w\") pod \"apiserver-76f77b778f-4vxzc\" (UID: \"3d875f76-8d31-46f5-9fcc-20d2868e7c2f\") " pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.063906 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qg4\" (UniqueName: \"kubernetes.io/projected/63ad35a4-2e13-46d4-9404-690ffddd919e-kube-api-access-r8qg4\") pod \"machine-api-operator-5694c8668f-5c5f4\" (UID: \"63ad35a4-2e13-46d4-9404-690ffddd919e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.070627 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.071038 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.073849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfckt\" (UniqueName: \"kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt\") pod \"route-controller-manager-6576b87f9c-pzw5h\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.091033 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zjx\" (UniqueName: \"kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx\") pod \"controller-manager-879f6c89f-vhtz4\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.091721 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.095657 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8sm6\" (UniqueName: \"kubernetes.io/projected/a664d458-7627-417c-ad03-5665fe60d20a-kube-api-access-j8sm6\") pod \"authentication-operator-69f744f599-b2x7w\" (UID: \"a664d458-7627-417c-ad03-5665fe60d20a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.106521 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.127161 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.146393 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.148463 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.154473 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.168216 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.180720 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.187798 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.207809 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.220988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.226178 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.228614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a8ccb95b-da48-49af-a2bf-4d10505c73ae-metrics-certs\") pod \"network-metrics-daemon-55482\" (UID: \"a8ccb95b-da48-49af-a2bf-4d10505c73ae\") " pod="openshift-multus/network-metrics-daemon-55482" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.228667 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.232608 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.248673 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.268558 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.276906 4809 util.go:30] "No sandbox for pod can be found. 
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.276906 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-55482"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.292943 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.308958 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.329050 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.348976 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.361968 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"]
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.377130 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.389541 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4vxzc"]
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.389753 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: W0226 14:17:33.409276 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d875f76_8d31_46f5_9fcc_20d2868e7c2f.slice/crio-62df6f56b94f6ad0c3602b683603eff8ea2ed6dc16a578163f982a6ffdc43dad WatchSource:0}: Error finding container 62df6f56b94f6ad0c3602b683603eff8ea2ed6dc16a578163f982a6ffdc43dad: Status 404 returned error can't find the container with id 62df6f56b94f6ad0c3602b683603eff8ea2ed6dc16a578163f982a6ffdc43dad
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.409717 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.427779 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.447840 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.457025 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"]
Feb 26 14:17:33 crc kubenswrapper[4809]: W0226 14:17:33.465640 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a54ce01_6b5f_4e57_8069_e5380a6e153f.slice/crio-0cd0cb086ff0cde39867104fdb651bba796e87492bab236622b9c8f5b0a5b9cb WatchSource:0}: Error finding container 0cd0cb086ff0cde39867104fdb651bba796e87492bab236622b9c8f5b0a5b9cb: Status 404 returned error can't find the container with id 0cd0cb086ff0cde39867104fdb651bba796e87492bab236622b9c8f5b0a5b9cb
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.467906 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.487673 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.507237 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.512806 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-55482"]
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.527019 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.542647 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2x7w"]
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.548038 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.573031 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.587831 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.594656 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5c5f4"]
Feb 26 14:17:33 crc kubenswrapper[4809]: W0226 14:17:33.606469 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ad35a4_2e13_46d4_9404_690ffddd919e.slice/crio-6693cc40f51e46ed9312151eb10d9982516ebe590883448e3cd0354d20546b28 WatchSource:0}: Error finding container 6693cc40f51e46ed9312151eb10d9982516ebe590883448e3cd0354d20546b28: Status 404 returned error can't find the container with id 6693cc40f51e46ed9312151eb10d9982516ebe590883448e3cd0354d20546b28
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.607446 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.628161 4809 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.647441 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.667519 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.689433 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.708119 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.749606 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.768335 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.789265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.835088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl755\" (UniqueName: \"kubernetes.io/projected/02f12e35-0b9a-4af4-ac63-2602bebcb9b0-kube-api-access-zl755\") pod \"downloads-7954f5f757-jlgsb\" (UID: \"02f12e35-0b9a-4af4-ac63-2602bebcb9b0\") " pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.835170 4809 request.go:700] Waited for 1.952586271s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.853248 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjh4\" (UniqueName: \"kubernetes.io/projected/85de89d6-b550-49ac-b2e6-ec83ae54cac8-kube-api-access-hhjh4\") pod \"machine-approver-56656f9798-7f2nt\" (UID: \"85de89d6-b550-49ac-b2e6-ec83ae54cac8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.864500 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhf4\" (UniqueName: \"kubernetes.io/projected/76527e96-13d7-4cc0-b245-dde49efb2786-kube-api-access-slhf4\") pod \"etcd-operator-b45778765-4drch\" (UID: \"76527e96-13d7-4cc0-b245-dde49efb2786\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.880601 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mbk\" (UniqueName: \"kubernetes.io/projected/04cffd1e-6ff5-4cd3-b013-d92034639a1e-kube-api-access-v8mbk\") pod \"cluster-samples-operator-665b6dd947-ckwlv\" (UID: \"04cffd1e-6ff5-4cd3-b013-d92034639a1e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.901389 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.902394 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnqn\" (UniqueName: \"kubernetes.io/projected/9bb17c48-4174-42b1-91f5-a3debbbc23c6-kube-api-access-tvnqn\") pod \"migrator-59844c95c7-zj2zg\" (UID: \"9bb17c48-4174-42b1-91f5-a3debbbc23c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.931243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q4mt\" (UniqueName: \"kubernetes.io/projected/aa3c1976-2d4b-4732-83c1-2cee83bbd3e8-kube-api-access-8q4mt\") pod \"openshift-apiserver-operator-796bbdcf4f-sbqqs\" (UID: \"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.943599 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.958046 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.963317 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd00ba25-7848-4991-ba14-669a11a0d349-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ndcmf\" (UID: \"fd00ba25-7848-4991-ba14-669a11a0d349\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:33 crc kubenswrapper[4809]: I0226 14:17:33.982264 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gn5t\" (UniqueName: \"kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t\") pod \"console-f9d7485db-c2d27\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.007365 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsvn\" (UniqueName: \"kubernetes.io/projected/f510a8fd-d7e5-4434-8505-884005bd90ee-kube-api-access-rvsvn\") pod \"ingress-operator-5b745b69d9-kw7jr\" (UID: \"f510a8fd-d7e5-4434-8505-884005bd90ee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.012081 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.018523 4809 util.go:30] "No sandbox for pod can be found. 
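
The "Waited for 1.952586271s due to client-side throttling" entry above means the kubelet's own API client rate limiter, not the server's priority-and-fairness machinery, delayed that token POST: with dozens of service-account token requests issued at once during startup, the client's token bucket drains and later requests queue. A sketch of the same token-bucket behavior using golang.org/x/time/rate, with client-go's classic library defaults of 5 QPS / burst 10 as an assumption (this kubelet's configured values may differ):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: 5 tokens/second refill, holds at most 10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	start := time.Now()
	for i := 0; i < 20; i++ {
		// Wait blocks until a token is available. With 20 calls arriving at
		// once, the bucket's 10 burst tokens go instantly and the last call
		// waits roughly (20-10)/5 = 2s, the shape of the ~1.95s delay logged.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
	}
	fmt.Printf("20 requests admitted in %v\n", time.Since(start).Round(100*time.Millisecond))
}
```
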
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.018523 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.028280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89bdf\" (UniqueName: \"kubernetes.io/projected/b9d62ac5-d483-4086-be8e-e1b7a784701c-kube-api-access-89bdf\") pod \"apiserver-7bbb656c7d-7p66j\" (UID: \"b9d62ac5-d483-4086-be8e-e1b7a784701c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.042413 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4drch"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.043884 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7jw\" (UniqueName: \"kubernetes.io/projected/20693fe0-6e35-4ecd-ace1-4ef044206c00-kube-api-access-vp7jw\") pod \"openshift-controller-manager-operator-756b6f6bc6-kt6nh\" (UID: \"20693fe0-6e35-4ecd-ace1-4ef044206c00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.052890 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.068373 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.077232 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qhdg\" (UniqueName: \"kubernetes.io/projected/6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b-kube-api-access-9qhdg\") pod \"dns-default-mxjxl\" (UID: \"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b\") " pod="openshift-dns/dns-default-mxjxl"
Feb 26 14:17:34 crc kubenswrapper[4809]: W0226 14:17:34.092927 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85de89d6_b550_49ac_b2e6_ec83ae54cac8.slice/crio-c025cd58d863872f84951948097df77638a3d9ff44ac0e2289fb3d9272d59953 WatchSource:0}: Error finding container c025cd58d863872f84951948097df77638a3d9ff44ac0e2289fb3d9272d59953: Status 404 returned error can't find the container with id c025cd58d863872f84951948097df77638a3d9ff44ac0e2289fb3d9272d59953
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.106626 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv"]
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131860 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131891 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzgxh\" (UniqueName: \"kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131958 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc4c2ace-f831-4413-b703-522b24da3a71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.131987 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4c2ace-f831-4413-b703-522b24da3a71-serving-cert\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132041 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132067 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2917e129-ff3e-417c-86f3-0625613663de-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132112 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-config\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132136 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132152 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be611c-e33e-479e-ba69-f2c1ee615b74-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132168 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d87972ef-20a8-4130-b6d2-2afe3766c8bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132184 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dn2\" (UniqueName: \"kubernetes.io/projected/bc4c2ace-f831-4413-b703-522b24da3a71-kube-api-access-k6dn2\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132280 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36eaa772-a53b-4c58-9e98-fb438b1fdee4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132522 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-trusted-ca\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132794 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.132833 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdf8e15-0bb8-4200-8b1b-517382e568a4-serving-cert\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133033 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133274 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133322 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133373 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n"
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133423 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d87972ef-20a8-4130-b6d2-2afe3766c8bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfh6q\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.133782 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:34.633769021 +0000 UTC m=+233.107089544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133917 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.133991 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtq99\" (UniqueName: \"kubernetes.io/projected/cfdf8e15-0bb8-4200-8b1b-517382e568a4-kube-api-access-qtq99\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134838 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134905 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be611c-e33e-479e-ba69-f2c1ee615b74-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134925 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2917e129-ff3e-417c-86f3-0625613663de-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134949 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36eaa772-a53b-4c58-9e98-fb438b1fdee4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36eaa772-a53b-4c58-9e98-fb438b1fdee4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.134982 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2917e129-ff3e-417c-86f3-0625613663de-config\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135038 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9zt\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-kube-api-access-pf9zt\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135055 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82z8\" (UniqueName: \"kubernetes.io/projected/d0be611c-e33e-479e-ba69-f2c1ee615b74-kube-api-access-x82z8\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: 
\"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135088 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135104 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcps6\" (UniqueName: \"kubernetes.io/projected/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-kube-api-access-jcps6\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135145 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.135168 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-metrics-tls\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.197093 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.216350 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jlgsb"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.229239 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238611 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238830 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zpw\" (UniqueName: \"kubernetes.io/projected/fbade11b-78dc-4961-8b28-3d1493bab84c-kube-api-access-q9zpw\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238876 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-stats-auth\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238895 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238911 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238929 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzgxh\" (UniqueName: \"kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.238948 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-registration-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " 
pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-default-certificate\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239059 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phhsk\" (UniqueName: \"kubernetes.io/projected/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-kube-api-access-phhsk\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239079 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cf0a064-9313-441b-9ab2-19a3b64ec281-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: \"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-tmpfs\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239133 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239148 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4c2ace-f831-4413-b703-522b24da3a71-serving-cert\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc4c2ace-f831-4413-b703-522b24da3a71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239196 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crvg\" (UniqueName: \"kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg\") pod \"auto-csr-approver-29535256-qfv5b\" (UID: \"4611e2a1-2842-4901-b49b-126b928b38f1\") " pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239238 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2917e129-ff3e-417c-86f3-0625613663de-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239312 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-config\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239326 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239342 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be611c-e33e-479e-ba69-f2c1ee615b74-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239356 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d87972ef-20a8-4130-b6d2-2afe3766c8bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6dn2\" (UniqueName: \"kubernetes.io/projected/bc4c2ace-f831-4413-b703-522b24da3a71-kube-api-access-k6dn2\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239388 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tvr5\" (UniqueName: \"kubernetes.io/projected/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-kube-api-access-8tvr5\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239431 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxqb\" (UniqueName: \"kubernetes.io/projected/52765744-e1b6-4600-8037-d144a9dc61ab-kube-api-access-sbxqb\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhrh\" (UniqueName: \"kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gmxr\" (UniqueName: \"kubernetes.io/projected/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-kube-api-access-4gmxr\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239479 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239504 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7p2\" (UniqueName: \"kubernetes.io/projected/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-kube-api-access-tz7p2\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36eaa772-a53b-4c58-9e98-fb438b1fdee4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239815 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444zg\" (UniqueName: \"kubernetes.io/projected/49550fbb-c382-4ee2-9f93-fb53816fb1c7-kube-api-access-444zg\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-service-ca-bundle\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wrn\" (UniqueName: \"kubernetes.io/projected/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-kube-api-access-24wrn\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239873 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-trusted-ca\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239895 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239916 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-config\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239937 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdf8e15-0bb8-4200-8b1b-517382e568a4-serving-cert\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239958 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9qx\" (UniqueName: \"kubernetes.io/projected/ae850862-0d3f-4f13-b723-0a0a66d1bda7-kube-api-access-zh9qx\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.239974 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-mountpoint-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240022 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240067 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-webhook-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240107 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240123 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240138 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240154 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlsb7\" (UniqueName: \"kubernetes.io/projected/0b74b8aa-c615-4cbe-a08f-2781174e2596-kube-api-access-qlsb7\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240170 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6m7\" (UniqueName: \"kubernetes.io/projected/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-kube-api-access-5w6m7\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240186 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhg5\" (UniqueName: \"kubernetes.io/projected/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-kube-api-access-dxhg5\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240262 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-csi-data-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240289 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hjn\" (UniqueName: \"kubernetes.io/projected/2cfda986-0167-488b-b585-5212627c9f28-kube-api-access-24hjn\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240330 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8df6q\" (UniqueName: \"kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240351 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240368 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d87972ef-20a8-4130-b6d2-2afe3766c8bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dm4\" (UniqueName: \"kubernetes.io/projected/0cf0a064-9313-441b-9ab2-19a3b64ec281-kube-api-access-k5dm4\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: \"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240415 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-metrics-certs\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240433 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240461 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfh6q\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240476 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-plugins-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: 
\"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240491 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-srv-cert\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-certs\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240523 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52765744-e1b6-4600-8037-d144a9dc61ab-proxy-tls\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240541 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240557 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtq99\" (UniqueName: \"kubernetes.io/projected/cfdf8e15-0bb8-4200-8b1b-517382e568a4-kube-api-access-qtq99\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240573 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-srv-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240598 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240614 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc 
kubenswrapper[4809]: I0226 14:17:34.240631 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-serving-cert\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240670 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be611c-e33e-479e-ba69-f2c1ee615b74-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-proxy-tls\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2917e129-ff3e-417c-86f3-0625613663de-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240750 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-socket-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240765 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cfda986-0167-488b-b585-5212627c9f28-cert\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240781 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-images\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240800 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36eaa772-a53b-4c58-9e98-fb438b1fdee4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240817 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36eaa772-a53b-4c58-9e98-fb438b1fdee4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240852 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240869 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-key\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240885 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2917e129-ff3e-417c-86f3-0625613663de-config\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240901 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9zt\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-kube-api-access-pf9zt\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240918 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240958 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.240975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-cabundle\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241001 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-node-bootstrap-token\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241061 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82z8\" (UniqueName: \"kubernetes.io/projected/d0be611c-e33e-479e-ba69-f2c1ee615b74-kube-api-access-x82z8\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241079 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241116 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcps6\" (UniqueName: \"kubernetes.io/projected/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-kube-api-access-jcps6\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241134 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241151 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.241209 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-metrics-tls\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.242882 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:34.742822266 +0000 UTC m=+233.216142799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.243778 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.253979 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be611c-e33e-479e-ba69-f2c1ee615b74-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.254678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4c2ace-f831-4413-b703-522b24da3a71-serving-cert\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.255049 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc4c2ace-f831-4413-b703-522b24da3a71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.257322 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.258176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.258839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.260542 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.261751 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36eaa772-a53b-4c58-9e98-fb438b1fdee4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.263249 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.264620 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.264943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.265113 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.265229 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2917e129-ff3e-417c-86f3-0625613663de-config\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.265487 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.265841 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-config\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.265931 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf8e15-0bb8-4200-8b1b-517382e568a4-trusted-ca\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.266514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36eaa772-a53b-4c58-9e98-fb438b1fdee4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.267972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.268734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be611c-e33e-479e-ba69-f2c1ee615b74-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.270603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.270644 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d87972ef-20a8-4130-b6d2-2afe3766c8bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.274706 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.275590 4809 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.275827 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.276369 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d87972ef-20a8-4130-b6d2-2afe3766c8bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.277969 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.278391 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.281314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2917e129-ff3e-417c-86f3-0625613663de-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.281628 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-metrics-tls\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.281648 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.282109 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.282573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfdf8e15-0bb8-4200-8b1b-517382e568a4-serving-cert\") pod 
\"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.282849 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.290589 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzgxh\" (UniqueName: \"kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.296547 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rs49n\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.296747 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.314472 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36eaa772-a53b-4c58-9e98-fb438b1fdee4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p47ss\" (UID: \"36eaa772-a53b-4c58-9e98-fb438b1fdee4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.322967 4809 generic.go:334] "Generic (PLEG): container finished" podID="3d875f76-8d31-46f5-9fcc-20d2868e7c2f" containerID="fbad121054573c891282928ac9fd5eca08b720a47fb5326a0b038b0b5d4df5c5" exitCode=0 Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.327916 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfh6q\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.330801 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.330839 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.330854 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" event={"ID":"3d875f76-8d31-46f5-9fcc-20d2868e7c2f","Type":"ContainerDied","Data":"fbad121054573c891282928ac9fd5eca08b720a47fb5326a0b038b0b5d4df5c5"} Feb 26 14:17:34 crc kubenswrapper[4809]: 
I0226 14:17:34.330880 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" event={"ID":"3d875f76-8d31-46f5-9fcc-20d2868e7c2f","Type":"ContainerStarted","Data":"62df6f56b94f6ad0c3602b683603eff8ea2ed6dc16a578163f982a6ffdc43dad"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.331066 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" event={"ID":"1a54ce01-6b5f-4e57-8069-e5380a6e153f","Type":"ContainerStarted","Data":"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.331088 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" event={"ID":"1a54ce01-6b5f-4e57-8069-e5380a6e153f","Type":"ContainerStarted","Data":"0cd0cb086ff0cde39867104fdb651bba796e87492bab236622b9c8f5b0a5b9cb"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.331603 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.335451 4809 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vhtz4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.335870 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.335497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" event={"ID":"04cffd1e-6ff5-4cd3-b013-d92034639a1e","Type":"ContainerStarted","Data":"8d59cbe0890ce3976b72f1aeb2d25a94bb846f5ee95bbd59866089e8e8f1d7f8"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341742 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444zg\" (UniqueName: \"kubernetes.io/projected/49550fbb-c382-4ee2-9f93-fb53816fb1c7-kube-api-access-444zg\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-service-ca-bundle\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341817 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wrn\" (UniqueName: \"kubernetes.io/projected/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-kube-api-access-24wrn\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341847 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-config\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9qx\" (UniqueName: \"kubernetes.io/projected/ae850862-0d3f-4f13-b723-0a0a66d1bda7-kube-api-access-zh9qx\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341897 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-mountpoint-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-webhook-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341954 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlsb7\" (UniqueName: \"kubernetes.io/projected/0b74b8aa-c615-4cbe-a08f-2781174e2596-kube-api-access-qlsb7\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341980 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.341999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-profile-collector-cert\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342082 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5w6m7\" (UniqueName: \"kubernetes.io/projected/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-kube-api-access-5w6m7\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342102 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxhg5\" (UniqueName: \"kubernetes.io/projected/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-kube-api-access-dxhg5\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-csi-data-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342151 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hjn\" (UniqueName: \"kubernetes.io/projected/2cfda986-0167-488b-b585-5212627c9f28-kube-api-access-24hjn\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342178 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8df6q\" (UniqueName: \"kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342217 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-metrics-certs\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342236 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342253 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5dm4\" (UniqueName: \"kubernetes.io/projected/0cf0a064-9313-441b-9ab2-19a3b64ec281-kube-api-access-k5dm4\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: 
\"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-plugins-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342286 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342306 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-srv-cert\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52765744-e1b6-4600-8037-d144a9dc61ab-proxy-tls\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342342 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-certs\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342367 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-srv-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342386 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-serving-cert\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342407 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-proxy-tls\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342431 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-socket-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342444 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cfda986-0167-488b-b585-5212627c9f28-cert\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342459 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-images\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342480 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-key\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342507 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-cabundle\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342544 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-node-bootstrap-token\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342560 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342601 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zpw\" (UniqueName: \"kubernetes.io/projected/fbade11b-78dc-4961-8b28-3d1493bab84c-kube-api-access-q9zpw\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" 
Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342618 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-stats-auth\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342650 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-registration-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-default-certificate\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342690 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phhsk\" (UniqueName: \"kubernetes.io/projected/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-kube-api-access-phhsk\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342708 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cf0a064-9313-441b-9ab2-19a3b64ec281-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: \"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342724 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-tmpfs\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crvg\" (UniqueName: \"kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg\") pod \"auto-csr-approver-29535256-qfv5b\" (UID: \"4611e2a1-2842-4901-b49b-126b928b38f1\") " pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342779 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342796 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvr5\" (UniqueName: \"kubernetes.io/projected/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-kube-api-access-8tvr5\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342812 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxqb\" (UniqueName: \"kubernetes.io/projected/52765744-e1b6-4600-8037-d144a9dc61ab-kube-api-access-sbxqb\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342828 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhrh\" (UniqueName: \"kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gmxr\" (UniqueName: \"kubernetes.io/projected/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-kube-api-access-4gmxr\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342877 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7p2\" (UniqueName: \"kubernetes.io/projected/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-kube-api-access-tz7p2\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342895 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.343035 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-service-ca-bundle\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.343215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-registration-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.343232 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-socket-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.343357 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-mountpoint-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.343715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.344688 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" event={"ID":"63ad35a4-2e13-46d4-9404-690ffddd919e","Type":"ContainerStarted","Data":"c89724a4a93b0339844cd16234e8fe758dc52c4bbeb293c9d9f26100ad44841f"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.344727 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" event={"ID":"63ad35a4-2e13-46d4-9404-690ffddd919e","Type":"ContainerStarted","Data":"b78aebd04f7b2f586958766b1fefa277ad1ddb0337be4180fdf0c9547ab7a2ae"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.344737 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" event={"ID":"63ad35a4-2e13-46d4-9404-690ffddd919e","Type":"ContainerStarted","Data":"6693cc40f51e46ed9312151eb10d9982516ebe590883448e3cd0354d20546b28"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.344892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-cabundle\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.342471 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-config\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.345263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-plugins-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.345437 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:34.84542155 +0000 UTC m=+233.318742073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.346387 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-tmpfs\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.346529 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0b74b8aa-c615-4cbe-a08f-2781174e2596-csi-data-dir\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.348338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.350691 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52765744-e1b6-4600-8037-d144a9dc61ab-images\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.351328 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" event={"ID":"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2","Type":"ContainerStarted","Data":"336983f357ca66f38878973ca5b297d225544cd2b0a3a733cc2de4976aad7f7e"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.351399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" 
event={"ID":"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2","Type":"ContainerStarted","Data":"b15b9f77fe9c148b59aff8affa7b56be2511123974f2b2e243bdca416d1ace33"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.352536 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.352805 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.353617 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.357440 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52765744-e1b6-4600-8037-d144a9dc61ab-proxy-tls\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.357567 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-metrics-certs\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.357692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-profile-collector-cert\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.361339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" event={"ID":"85de89d6-b550-49ac-b2e6-ec83ae54cac8","Type":"ContainerStarted","Data":"c025cd58d863872f84951948097df77638a3d9ff44ac0e2289fb3d9272d59953"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.361630 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-certs\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.361830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-proxy-tls\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.361823 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0cf0a064-9313-441b-9ab2-19a3b64ec281-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: \"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.362888 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.363695 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbade11b-78dc-4961-8b28-3d1493bab84c-srv-cert\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.364157 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-serving-cert\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.364430 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-stats-auth\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.367278 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtq99\" (UniqueName: \"kubernetes.io/projected/cfdf8e15-0bb8-4200-8b1b-517382e568a4-kube-api-access-qtq99\") pod \"console-operator-58897d9998-pdzjj\" (UID: \"cfdf8e15-0bb8-4200-8b1b-517382e568a4\") " pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.367541 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.368390 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.368791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-node-bootstrap-token\") pod 
\"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.369359 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.369375 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae850862-0d3f-4f13-b723-0a0a66d1bda7-signing-key\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.369545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-webhook-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.369918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cfda986-0167-488b-b585-5212627c9f28-cert\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.370106 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-default-certificate\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.371224 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-55482" event={"ID":"a8ccb95b-da48-49af-a2bf-4d10505c73ae","Type":"ContainerStarted","Data":"b4ce34c40c3744a1413318abb8a11e578eacaa2f7a4343c62266dac353f98db4"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.371289 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-55482" event={"ID":"a8ccb95b-da48-49af-a2bf-4d10505c73ae","Type":"ContainerStarted","Data":"fa7564cb0433e48c116b887e42efcd68435b2fc41582a10d98ddfd3d728c0193"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.371302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-55482" event={"ID":"a8ccb95b-da48-49af-a2bf-4d10505c73ae","Type":"ContainerStarted","Data":"1e05fe002620ac109a1544858d0866f6754f3d324a2bedab6e1039a4bc139d00"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.379586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49550fbb-c382-4ee2-9f93-fb53816fb1c7-srv-cert\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.379927 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" event={"ID":"a664d458-7627-417c-ad03-5665fe60d20a","Type":"ContainerStarted","Data":"58a2f73018325c02b9daf8db02625d38d14f50b5b6bbf873dd26b410e9b8b28c"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.379977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" event={"ID":"a664d458-7627-417c-ad03-5665fe60d20a","Type":"ContainerStarted","Data":"dac675d1d68b9b1826fc2f4d5b9c75343206da567033e0755e105883b4c2b9d0"} Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.383367 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.388955 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.389294 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.389878 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.393973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2917e129-ff3e-417c-86f3-0625613663de-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cvtbt\" (UID: \"2917e129-ff3e-417c-86f3-0625613663de\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.403643 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4drch"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.420509 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcps6\" (UniqueName: \"kubernetes.io/projected/3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea-kube-api-access-jcps6\") pod \"dns-operator-744455d44c-xfdk4\" (UID: \"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea\") " pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.425333 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9zt\" (UniqueName: \"kubernetes.io/projected/d87972ef-20a8-4130-b6d2-2afe3766c8bc-kube-api-access-pf9zt\") pod \"cluster-image-registry-operator-dc59b4c8b-ccrdr\" (UID: \"d87972ef-20a8-4130-b6d2-2afe3766c8bc\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.443664 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82z8\" (UniqueName: \"kubernetes.io/projected/d0be611c-e33e-479e-ba69-f2c1ee615b74-kube-api-access-x82z8\") pod \"kube-storage-version-migrator-operator-b67b599dd-hnnzr\" (UID: \"d0be611c-e33e-479e-ba69-f2c1ee615b74\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.444047 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.446498 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:34.946465268 +0000 UTC m=+233.419785811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.449169 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.449517 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:34.949501188 +0000 UTC m=+233.422821711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.479311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.490798 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6dn2\" (UniqueName: \"kubernetes.io/projected/bc4c2ace-f831-4413-b703-522b24da3a71-kube-api-access-k6dn2\") pod \"openshift-config-operator-7777fb866f-4kbp5\" (UID: \"bc4c2ace-f831-4413-b703-522b24da3a71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.522038 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.533124 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444zg\" (UniqueName: \"kubernetes.io/projected/49550fbb-c382-4ee2-9f93-fb53816fb1c7-kube-api-access-444zg\") pod \"olm-operator-6b444d44fb-qpw8j\" (UID: \"49550fbb-c382-4ee2-9f93-fb53816fb1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.534518 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.545368 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.552429 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.561244 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.561976 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.061900081 +0000 UTC m=+233.535220604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.579863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wrn\" (UniqueName: \"kubernetes.io/projected/f2bb0d0e-f2f7-4cd6-80d7-1361b474874f-kube-api-access-24wrn\") pod \"machine-config-controller-84d6567774-8mq5j\" (UID: \"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.584776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9qx\" (UniqueName: \"kubernetes.io/projected/ae850862-0d3f-4f13-b723-0a0a66d1bda7-kube-api-access-zh9qx\") pod \"service-ca-9c57cc56f-l65jj\" (UID: \"ae850862-0d3f-4f13-b723-0a0a66d1bda7\") " pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.600819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvr5\" (UniqueName: \"kubernetes.io/projected/a71f5fc0-296c-47c7-ae8b-63cddaa00c27-kube-api-access-8tvr5\") pod \"package-server-manager-789f6589d5-t96dj\" (UID: \"a71f5fc0-296c-47c7-ae8b-63cddaa00c27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.601264 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.616821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phhsk\" (UniqueName: \"kubernetes.io/projected/6e67602b-e831-4e88-8f32-e6fa8e2a9ab1-kube-api-access-phhsk\") pod \"multus-admission-controller-857f4d67dd-5cv4q\" (UID: \"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.619451 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.627877 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8df6q\" (UniqueName: \"kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q\") pod \"collect-profiles-29535255-l6b9h\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.628446 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.632521 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" Feb 26 14:17:34 crc kubenswrapper[4809]: W0226 14:17:34.641192 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d62ac5_d483_4086_be8e_e1b7a784701c.slice/crio-e5dfc04693ef1f01f8f9b1990738e86551db21222946159f0d2c004863c34056 WatchSource:0}: Error finding container e5dfc04693ef1f01f8f9b1990738e86551db21222946159f0d2c004863c34056: Status 404 returned error can't find the container with id e5dfc04693ef1f01f8f9b1990738e86551db21222946159f0d2c004863c34056 Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.649299 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6m7\" (UniqueName: \"kubernetes.io/projected/420d9fa3-a7e7-4ddf-8f30-70a56496e0e1-kube-api-access-5w6m7\") pod \"router-default-5444994796-dwhvv\" (UID: \"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1\") " pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.663917 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.664432 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.164414492 +0000 UTC m=+233.637735015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.670325 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.678250 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.679430 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gmxr\" (UniqueName: \"kubernetes.io/projected/b9e6c990-2800-4d38-9d14-a29e41ea8f3a-kube-api-access-4gmxr\") pod \"machine-config-server-kjlkr\" (UID: \"b9e6c990-2800-4d38-9d14-a29e41ea8f3a\") " pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.695617 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.696709 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crvg\" (UniqueName: \"kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg\") pod \"auto-csr-approver-29535256-qfv5b\" (UID: \"4611e2a1-2842-4901-b49b-126b928b38f1\") " pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.700044 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.707282 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.712005 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxhg5\" (UniqueName: \"kubernetes.io/projected/1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4-kube-api-access-dxhg5\") pod \"packageserver-d55dfcdfc-qnldf\" (UID: \"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.714523 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.730303 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxqb\" (UniqueName: \"kubernetes.io/projected/52765744-e1b6-4600-8037-d144a9dc61ab-kube-api-access-sbxqb\") pod \"machine-config-operator-74547568cd-bn62q\" (UID: \"52765744-e1b6-4600-8037-d144a9dc61ab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.734216 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.742339 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.752372 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.762336 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhrh\" (UniqueName: \"kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh\") pod \"marketplace-operator-79b997595-v58kd\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:34 crc kubenswrapper[4809]: W0226 14:17:34.763480 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa3c1976_2d4b_4732_83c1_2cee83bbd3e8.slice/crio-e8371c001f41f1d85a6b4ea42b4b41788aae199cd59a67bad5fe6834d8d70e0f WatchSource:0}: Error finding container e8371c001f41f1d85a6b4ea42b4b41788aae199cd59a67bad5fe6834d8d70e0f: Status 404 returned error can't find the container with id e8371c001f41f1d85a6b4ea42b4b41788aae199cd59a67bad5fe6834d8d70e0f Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.766402 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.768398 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.268373067 +0000 UTC m=+233.741693590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.772256 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.785066 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlsb7\" (UniqueName: \"kubernetes.io/projected/0b74b8aa-c615-4cbe-a08f-2781174e2596-kube-api-access-qlsb7\") pod \"csi-hostpathplugin-dcg4s\" (UID: \"0b74b8aa-c615-4cbe-a08f-2781174e2596\") " pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.806078 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zpw\" (UniqueName: \"kubernetes.io/projected/fbade11b-78dc-4961-8b28-3d1493bab84c-kube-api-access-q9zpw\") pod \"catalog-operator-68c6474976-qt4lx\" (UID: \"fbade11b-78dc-4961-8b28-3d1493bab84c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.809971 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hjn\" (UniqueName: \"kubernetes.io/projected/2cfda986-0167-488b-b585-5212627c9f28-kube-api-access-24hjn\") pod \"ingress-canary-nq6xq\" (UID: \"2cfda986-0167-488b-b585-5212627c9f28\") " pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.815598 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.829578 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nq6xq" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.833348 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5dm4\" (UniqueName: \"kubernetes.io/projected/0cf0a064-9313-441b-9ab2-19a3b64ec281-kube-api-access-k5dm4\") pod \"control-plane-machine-set-operator-78cbb6b69f-gpx9n\" (UID: \"0cf0a064-9313-441b-9ab2-19a3b64ec281\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.836081 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjlkr" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.848586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7p2\" (UniqueName: \"kubernetes.io/projected/47b66962-ad36-46e0-be47-cfc6cdd6bcc0-kube-api-access-tz7p2\") pod \"service-ca-operator-777779d784-vm52c\" (UID: \"47b66962-ad36-46e0-be47-cfc6cdd6bcc0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.869264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.869672 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 14:17:35.369659492 +0000 UTC m=+233.842980015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.970969 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:34 crc kubenswrapper[4809]: E0226 14:17:34.971456 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.471432802 +0000 UTC m=+233.944753325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.981002 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.988514 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.991804 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss"] Feb 26 14:17:34 crc kubenswrapper[4809]: I0226 14:17:34.994803 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.010540 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pdzjj"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.012647 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mxjxl"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.056225 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.063056 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.072950 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.074167 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.574152429 +0000 UTC m=+234.047472952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.078102 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.087356 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.096511 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.174711 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.175195 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.675169647 +0000 UTC m=+234.148490180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.177950 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.226691 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.264648 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-55482" podStartSLOduration=175.264610393 podStartE2EDuration="2m55.264610393s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:35.240044659 +0000 UTC m=+233.713365182" watchObservedRunningTime="2026-02-26 14:17:35.264610393 +0000 UTC m=+233.737930916" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.278699 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.279343 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.779328877 +0000 UTC m=+234.252649400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.380368 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.381246 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.88122453 +0000 UTC m=+234.354545053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.406728 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" event={"ID":"85de89d6-b550-49ac-b2e6-ec83ae54cac8","Type":"ContainerStarted","Data":"bc2bab53814034daf1c6e350aee4250023906285c32acd046c32b192835f626a"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.409347 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" podStartSLOduration=174.409329399 podStartE2EDuration="2m54.409329399s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:35.395631875 +0000 UTC m=+233.868952398" watchObservedRunningTime="2026-02-26 14:17:35.409329399 +0000 UTC m=+233.882649922" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.423172 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" event={"ID":"fd00ba25-7848-4991-ba14-669a11a0d349","Type":"ContainerStarted","Data":"f11ab9b1b68ccbc40e636c8f878152e7c803ee8712006d8558df5f2286fb9ff3"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.423527 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" event={"ID":"fd00ba25-7848-4991-ba14-669a11a0d349","Type":"ContainerStarted","Data":"45c65aa33bc33434e52e674f419f0191d5850f68dd50fd6b3ada4084b98f476d"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.427807 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rs49n"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.448913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" event={"ID":"d87972ef-20a8-4130-b6d2-2afe3766c8bc","Type":"ContainerStarted","Data":"30e2a3a7c5de3f52051f0fc0307ffa9f0f705c121ffe35ebe03645356cdd6c4e"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.458127 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.482469 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.484913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jlgsb" 
event={"ID":"02f12e35-0b9a-4af4-ac63-2602bebcb9b0","Type":"ContainerStarted","Data":"4fb6503b4cabcfa971c0fe23f21a676e41c053c256b0603d154810a368f1b3cd"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.484964 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jlgsb" event={"ID":"02f12e35-0b9a-4af4-ac63-2602bebcb9b0","Type":"ContainerStarted","Data":"9adc1b7dca1fd69aaf03bd463dbd3a7a6c43b3f45e72b3b0c895f191ccfa85c0"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.487974 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.490668 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:35.990612694 +0000 UTC m=+234.463933217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.520752 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.520854 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.534787 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" event={"ID":"04cffd1e-6ff5-4cd3-b013-d92034639a1e","Type":"ContainerStarted","Data":"44872c1dfca1188a65e0e0631425833ad45759fa0a8547ac50304bf1f5af047b"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.536717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" event={"ID":"20693fe0-6e35-4ecd-ace1-4ef044206c00","Type":"ContainerStarted","Data":"15dd98c4d77d9bf1e5a01e1f30e952d6ebea998665d095186c6b30d517dec105"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.554994 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xfdk4"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.566663 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" event={"ID":"36eaa772-a53b-4c58-9e98-fb438b1fdee4","Type":"ContainerStarted","Data":"64f62489889a1365379efd7ec0ecaa50343b717ae2aeda098862442f1dfe0024"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.583354 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.584735 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.084716088 +0000 UTC m=+234.558036611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.585032 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" event={"ID":"9bb17c48-4174-42b1-91f5-a3debbbc23c6","Type":"ContainerStarted","Data":"d28a8ebeacb63f9e7bd8a1bb58cfb8c91d45aab22f1263613cc6bacefad438e8"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.585072 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" event={"ID":"9bb17c48-4174-42b1-91f5-a3debbbc23c6","Type":"ContainerStarted","Data":"f213a54179eff74659da51bea11c945c0aa879701444682f5d24df5412099583"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.587896 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mxjxl" event={"ID":"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b","Type":"ContainerStarted","Data":"dcdb9ccdadd631a6dd20b7b0350c399bddd5e37571e8bd94f4cc3e43002980d3"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.596870 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" event={"ID":"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8","Type":"ContainerStarted","Data":"e8371c001f41f1d85a6b4ea42b4b41788aae199cd59a67bad5fe6834d8d70e0f"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.610235 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerStarted","Data":"f55bb9b6d251f439a67b49c566942e8cb813bb11ae4c33fa39f7843c1f2452fb"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.611594 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c2d27" event={"ID":"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74","Type":"ContainerStarted","Data":"13b0be33912e838b93e9b5d3268309d401a2ccbcd360c8bfe18acfbcf815c1a6"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.613535 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" event={"ID":"76527e96-13d7-4cc0-b245-dde49efb2786","Type":"ContainerStarted","Data":"33ec22d62060187ee0f395884b301290371f7314f97c9c902937cb71b244fdd7"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.613568 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" event={"ID":"76527e96-13d7-4cc0-b245-dde49efb2786","Type":"ContainerStarted","Data":"0c62434051ebb40051bf4471d892ee9d11f8d0354157603fb0f28c8837c1423f"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.614589 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" event={"ID":"b9d62ac5-d483-4086-be8e-e1b7a784701c","Type":"ContainerStarted","Data":"e5dfc04693ef1f01f8f9b1990738e86551db21222946159f0d2c004863c34056"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.623266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" event={"ID":"f510a8fd-d7e5-4434-8505-884005bd90ee","Type":"ContainerStarted","Data":"c14fe93c212610e2c232cea62307b3f1cb99c8aa2d134f99d821e7261d59d0b7"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.623331 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" event={"ID":"f510a8fd-d7e5-4434-8505-884005bd90ee","Type":"ContainerStarted","Data":"1522f3348c0aff47c693c66e9d8cf1bcef5ed2fd843dd1cbe762010051b771d5"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.629818 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr"] Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.636397 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dwhvv" event={"ID":"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1","Type":"ContainerStarted","Data":"756239e7b6c589a1d06ec3da831d75f168c3c5ecf4afd83a23aaee6e4389b86a"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.645883 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" event={"ID":"cfdf8e15-0bb8-4200-8b1b-517382e568a4","Type":"ContainerStarted","Data":"3cfeab171081e347b6b9b9c93aa9267fbfacfe183af35fe12af15833821a752a"} Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.653802 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.685458 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.685832 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.185817628 +0000 UTC m=+234.659138151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.786868 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.787713 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.28767612 +0000 UTC m=+234.760996633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.787941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.791535 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.291518504 +0000 UTC m=+234.764839027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.895416 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.896238 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.39621692 +0000 UTC m=+234.869537443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:35 crc kubenswrapper[4809]: I0226 14:17:35.997439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:35 crc kubenswrapper[4809]: E0226 14:17:35.998391 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.498375911 +0000 UTC m=+234.971696434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.101027 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.101484 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.601461979 +0000 UTC m=+235.074782502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.204317 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.204823 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.704805804 +0000 UTC m=+235.178126327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.305336 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.306026 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.805991717 +0000 UTC m=+235.279312230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.411409 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.417506 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:36.917461942 +0000 UTC m=+235.390782465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.419923 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" podStartSLOduration=176.419902694 podStartE2EDuration="2m56.419902694s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.418128662 +0000 UTC m=+234.891449205" watchObservedRunningTime="2026-02-26 14:17:36.419902694 +0000 UTC m=+234.893223217" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.457180 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" podStartSLOduration=176.457056569 podStartE2EDuration="2m56.457056569s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.449442365 +0000 UTC m=+234.922762908" watchObservedRunningTime="2026-02-26 14:17:36.457056569 +0000 UTC m=+234.930377092" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.512203 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.519777 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.018003076 +0000 UTC m=+235.491323589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.522161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.522924 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.02290324 +0000 UTC m=+235.496223763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.631479 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.631889 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.131865712 +0000 UTC m=+235.605186235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.699917 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5c5f4" podStartSLOduration=175.699900217 podStartE2EDuration="2m55.699900217s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.698714542 +0000 UTC m=+235.172035085" watchObservedRunningTime="2026-02-26 14:17:36.699900217 +0000 UTC m=+235.173220740" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.710572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" event={"ID":"f510a8fd-d7e5-4434-8505-884005bd90ee","Type":"ContainerStarted","Data":"d56b33d60e547e60d172623f761ad5d8f554cc097892347ff81c6bf7aa7a7028"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.730555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" event={"ID":"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea","Type":"ContainerStarted","Data":"c7168ead86ae6af1ddbcb54dda209fa4466a07cae1049b8451985ce31e8b5cec"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.738773 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.739140 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.239126833 +0000 UTC m=+235.712447356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.746582 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" event={"ID":"04cffd1e-6ff5-4cd3-b013-d92034639a1e","Type":"ContainerStarted","Data":"80fd90eab6c6cf15107497cddffc30170af71ea0dff42dcb8a5b49dda1dce005"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.758836 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" event={"ID":"a71f5fc0-296c-47c7-ae8b-63cddaa00c27","Type":"ContainerStarted","Data":"5461535a741edb9abe38c10b18780ba25d89c352df0643f5f9a990d2a4462c1c"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.766540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" event={"ID":"3d875f76-8d31-46f5-9fcc-20d2868e7c2f","Type":"ContainerStarted","Data":"aebc320f6c74480e7965d52b4090ee672f864f391692635ef86ba7162d70c8f6"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.775869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" event={"ID":"aa3c1976-2d4b-4732-83c1-2cee83bbd3e8","Type":"ContainerStarted","Data":"fee6bd825fb1011cf935c714ff2aa3a584eb07c7bd5220077b29166c59f18285"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.791436 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" event={"ID":"420b577e-f310-4cc8-bc79-a2abcb837bbe","Type":"ContainerStarted","Data":"a6f4470ea3f13d2f319d82281df9013458392161ffd98dfc65a8a08acd27f5fd"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.829337 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" event={"ID":"d0be611c-e33e-479e-ba69-f2c1ee615b74","Type":"ContainerStarted","Data":"b6c61d915a20b805e19aac8abd3c70cfd0f37cbe0efaa9d8cc500e2ca0c7e0fe"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.840215 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.842638 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.342589853 +0000 UTC m=+235.815910376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.858486 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ndcmf" podStartSLOduration=175.858461861 podStartE2EDuration="2m55.858461861s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.794113924 +0000 UTC m=+235.267434447" watchObservedRunningTime="2026-02-26 14:17:36.858461861 +0000 UTC m=+235.331782384" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.867787 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjlkr" event={"ID":"b9e6c990-2800-4d38-9d14-a29e41ea8f3a","Type":"ContainerStarted","Data":"f79720278772d5dba354217ca502bb83c6907871388b58173d747df95d6276ad"} Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.870817 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.870878 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.915976 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kw7jr" podStartSLOduration=175.915954985 podStartE2EDuration="2m55.915954985s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.875544704 +0000 UTC m=+235.348865227" watchObservedRunningTime="2026-02-26 14:17:36.915954985 +0000 UTC m=+235.389275508" Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.941730 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:36 crc kubenswrapper[4809]: E0226 14:17:36.944394 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.444375463 +0000 UTC m=+235.917695986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:36 crc kubenswrapper[4809]: I0226 14:17:36.967756 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ckwlv" podStartSLOduration=176.967726401 podStartE2EDuration="2m56.967726401s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:36.958989174 +0000 UTC m=+235.432309697" watchObservedRunningTime="2026-02-26 14:17:36.967726401 +0000 UTC m=+235.441046934" Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.019851 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.033545 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-l65jj"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.043709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.045061 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.54504197 +0000 UTC m=+236.018362493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.060070 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4drch" podStartSLOduration=176.060048972 podStartE2EDuration="2m56.060048972s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:37.057767965 +0000 UTC m=+235.531088488" watchObservedRunningTime="2026-02-26 14:17:37.060048972 +0000 UTC m=+235.533369495" Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.067565 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nq6xq"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.091838 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jlgsb" podStartSLOduration=177.091814299 podStartE2EDuration="2m57.091814299s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:37.08169547 +0000 UTC m=+235.555015993" watchObservedRunningTime="2026-02-26 14:17:37.091814299 +0000 UTC m=+235.565134822" Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.109446 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sbqqs" podStartSLOduration=177.109428678 podStartE2EDuration="2m57.109428678s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:37.099494465 +0000 UTC m=+235.572814988" watchObservedRunningTime="2026-02-26 14:17:37.109428678 +0000 UTC m=+235.582749201" Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.117073 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.140756 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.158122 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.159077 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 14:17:37.659055761 +0000 UTC m=+236.132376284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.180406 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.192691 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.260939 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.261674 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.761647994 +0000 UTC m=+236.234968517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.261704 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vm52c"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.315646 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:17:37 crc kubenswrapper[4809]: W0226 14:17:37.349223 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b66962_ad36_46e0_be47_cfc6cdd6bcc0.slice/crio-b1e74525542eb8368e1d16c22592bcc66c79233065e2437cc3d17665e5c14441 WatchSource:0}: Error finding container b1e74525542eb8368e1d16c22592bcc66c79233065e2437cc3d17665e5c14441: Status 404 returned error can't find the container with id b1e74525542eb8368e1d16c22592bcc66c79233065e2437cc3d17665e5c14441 Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.359992 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-qfv5b"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.364886 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.365233 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.865221477 +0000 UTC m=+236.338542000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.375178 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcg4s"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.382078 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.384489 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.424537 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5cv4q"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.425732 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q"] Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.433588 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.466310 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.466741 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.966722069 +0000 UTC m=+236.440042592 (durationBeforeRetry 500ms). 
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.466964 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.467309 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:37.967300986 +0000 UTC m=+236.440621509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.570913 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.571346 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.071323852 +0000 UTC m=+236.544644375 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.674591 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.675495 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.175478342 +0000 UTC m=+236.648798865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.776296 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.776703 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.276679735 +0000 UTC m=+236.750000258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.880823 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.881198 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.381185935 +0000 UTC m=+236.854506458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.887523 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" event={"ID":"d87972ef-20a8-4130-b6d2-2afe3766c8bc","Type":"ContainerStarted","Data":"33814fdd86e4682da601b41e8f8ec054d5cd5046ec08722e7d54a4b4f055910a"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.900971 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" event={"ID":"49550fbb-c382-4ee2-9f93-fb53816fb1c7","Type":"ContainerStarted","Data":"da3f8ba5b490258d103c27dca6081766fa5e809c5c2a59b330e5fcd5516bd627"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.910910 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" event={"ID":"47b66962-ad36-46e0-be47-cfc6cdd6bcc0","Type":"ContainerStarted","Data":"b1e74525542eb8368e1d16c22592bcc66c79233065e2437cc3d17665e5c14441"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.916651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" event={"ID":"52765744-e1b6-4600-8037-d144a9dc61ab","Type":"ContainerStarted","Data":"c15283055902381ad09797394a100265d7224ed34d51c71a6dc041992de4e658"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.942984 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ccrdr" podStartSLOduration=177.942948666 podStartE2EDuration="2m57.942948666s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:37.933129406 +0000 UTC m=+236.406449939" watchObservedRunningTime="2026-02-26 14:17:37.942948666 +0000 UTC m=+236.416269189"
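
In the pod_startup_latency_tracker entry above, podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp, and podStartSLOduration matches it here because no image pull happened (firstStartedPulling is the zero time). A worked check of that arithmetic (Python; timestamps copied from the entry above, truncated to microseconds since datetime has no nanosecond field):

import datetime as dt

created = dt.datetime(2026, 2, 26, 14, 14, 40)
observed = dt.datetime(2026, 2, 26, 14, 17, 37, 942948)

delta = observed - created
print(delta.total_seconds())  # 177.942948 ~ podStartSLOduration=177.942948666
print(delta)                  # 0:02:57.942948 ~ podStartE2EDuration="2m57.942948666s"
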
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.955410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" event={"ID":"3d875f76-8d31-46f5-9fcc-20d2868e7c2f","Type":"ContainerStarted","Data":"2bff849544a97cbd4fa2aa08b2e059f4a865499659e198990b18cd50f9959761"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.966435 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" event={"ID":"27674e3b-1fb9-4e3a-83d9-2b77ccd40571","Type":"ContainerStarted","Data":"69cb1271f674d26f8a00e561cd3563d9a92323f9fb827ab1fedb084eeffa86cd"}
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.982040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.982226 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.482195043 +0000 UTC m=+236.955515556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.982419 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh"
Feb 26 14:17:37 crc kubenswrapper[4809]: E0226 14:17:37.983501 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.483485851 +0000 UTC m=+236.956806604 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:37 crc kubenswrapper[4809]: I0226 14:17:37.993982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mxjxl" event={"ID":"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b","Type":"ContainerStarted","Data":"6f70c973483266eaba8db26b65516bd4de8c3a13fd8f7caa7dc2cffb5f82a9b2"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.000521 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" podStartSLOduration=178.000495182 podStartE2EDuration="2m58.000495182s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:37.999056479 +0000 UTC m=+236.472377002" watchObservedRunningTime="2026-02-26 14:17:38.000495182 +0000 UTC m=+236.473815715" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.014789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" event={"ID":"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1","Type":"ContainerStarted","Data":"d5c5203f7b79ebb836b0c51ede2412c6415ad78a95ab2a3698c69fa9887b6047"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.025744 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c2d27" event={"ID":"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74","Type":"ContainerStarted","Data":"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.040314 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" podStartSLOduration=177.040289315 podStartE2EDuration="2m57.040289315s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.035635658 +0000 UTC m=+236.508956181" watchObservedRunningTime="2026-02-26 14:17:38.040289315 +0000 UTC m=+236.513609838" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.044410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" event={"ID":"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f","Type":"ContainerStarted","Data":"8b80ce6e44432352055f7d5b1b28f506d85088a5bfd70db54ea7263a3ca0ae7b"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.067556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjlkr" event={"ID":"b9e6c990-2800-4d38-9d14-a29e41ea8f3a","Type":"ContainerStarted","Data":"7dd51720173a670681774484eddbd2f3cc61b87d0752ad4ed7dcfe5f5d85939c"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.084118 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.085431 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.585389574 +0000 UTC m=+237.058710097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.117182 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c2d27" podStartSLOduration=178.117164481 podStartE2EDuration="2m58.117164481s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.076160112 +0000 UTC m=+236.549480645" watchObservedRunningTime="2026-02-26 14:17:38.117164481 +0000 UTC m=+236.590485004" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.118766 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kjlkr" podStartSLOduration=7.118758828 podStartE2EDuration="7.118758828s" podCreationTimestamp="2026-02-26 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.113443231 +0000 UTC m=+236.586763754" watchObservedRunningTime="2026-02-26 14:17:38.118758828 +0000 UTC m=+236.592079351" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.155979 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.156273 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.158580 4809 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4vxzc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.158644 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" podUID="3d875f76-8d31-46f5-9fcc-20d2868e7c2f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.175203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" 
event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerStarted","Data":"922b0cd06b07abb44f0268e616d5b2f854c8690898125d6e47c32003d9aa177c"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.182395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" event={"ID":"ae850862-0d3f-4f13-b723-0a0a66d1bda7","Type":"ContainerStarted","Data":"1f95c96f34b87dafdc1427bee07d334db68fda370e5800433d86e1c77a104f42"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.190439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.192455 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.692412419 +0000 UTC m=+237.165732942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.199154 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" event={"ID":"20693fe0-6e35-4ecd-ace1-4ef044206c00","Type":"ContainerStarted","Data":"39368038f56ef1a73bb429e9620c1ca06536e0b2bae733ac1c940fc3f0981e80"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.208782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" event={"ID":"c2048c3a-d91f-4ef5-93e1-41a621001c94","Type":"ContainerStarted","Data":"28c13921e890cc56643fa4c798d466098b923a6ca8da726e714a67073a6db8fd"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.212657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" event={"ID":"0b74b8aa-c615-4cbe-a08f-2781174e2596","Type":"ContainerStarted","Data":"1f51c2179822a383b58832e43b96d6b755bab4f12fae2ea26eb9743e03461241"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.251997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dwhvv" event={"ID":"420d9fa3-a7e7-4ddf-8f30-70a56496e0e1","Type":"ContainerStarted","Data":"9fb0e4aa081626a21fb5d7ca188db4fc097c07b27abb461c9c28f242e242cece"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.281214 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" podStartSLOduration=158.281196336 podStartE2EDuration="2m38.281196336s" podCreationTimestamp="2026-02-26 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 14:17:38.280911157 +0000 UTC m=+236.754231680" watchObservedRunningTime="2026-02-26 14:17:38.281196336 +0000 UTC m=+236.754516859" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.282737 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kt6nh" podStartSLOduration=178.282730161 podStartE2EDuration="2m58.282730161s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.248183093 +0000 UTC m=+236.721503616" watchObservedRunningTime="2026-02-26 14:17:38.282730161 +0000 UTC m=+236.756050684" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.292308 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" event={"ID":"a71f5fc0-296c-47c7-ae8b-63cddaa00c27","Type":"ContainerStarted","Data":"8e19595c0722112aeca9b3be647bd6660f7c8e698991fe96a754119db7e07c22"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.292703 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.293779 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.793755536 +0000 UTC m=+237.267076199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.303822 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nq6xq" event={"ID":"2cfda986-0167-488b-b585-5212627c9f28","Type":"ContainerStarted","Data":"c0bb375370de58bd607cfa8db0e162e1c59536616820869af6d8b5b0c5674977"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.312650 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" podStartSLOduration=177.312629292 podStartE2EDuration="2m57.312629292s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.310983794 +0000 UTC m=+236.784304317" watchObservedRunningTime="2026-02-26 14:17:38.312629292 +0000 UTC m=+236.785949815" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.327245 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" event={"ID":"9bb17c48-4174-42b1-91f5-a3debbbc23c6","Type":"ContainerStarted","Data":"9b5737bd7e0c768a54c2dba05aa1dd033d156ed5d7d30fb4bb215b055af68da3"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.334432 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.343449 4809 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rs49n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.343507 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.349760 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" event={"ID":"2917e129-ff3e-417c-86f3-0625613663de","Type":"ContainerStarted","Data":"1c85c4f763ca076be8b5e8bbd251fda35488580cc26b857fb6f5f68d249c99ee"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.379112 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dwhvv" podStartSLOduration=177.379087051 podStartE2EDuration="2m57.379087051s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.35530681 +0000 UTC m=+236.828627343" watchObservedRunningTime="2026-02-26 14:17:38.379087051 
+0000 UTC m=+236.852407574" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.395682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.398617 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.898603876 +0000 UTC m=+237.371924399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.408424 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" event={"ID":"cfdf8e15-0bb8-4200-8b1b-517382e568a4","Type":"ContainerStarted","Data":"5ae799f25356aa2fced6160d023a474970271749eb85e92ff4bf5ad90ae60bbf"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.409527 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.412598 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.412648 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.415959 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nq6xq" podStartSLOduration=7.415931237 podStartE2EDuration="7.415931237s" podCreationTimestamp="2026-02-26 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.381368398 +0000 UTC m=+236.854688921" watchObservedRunningTime="2026-02-26 14:17:38.415931237 +0000 UTC m=+236.889251850" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.416915 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zj2zg" podStartSLOduration=177.416908426 podStartE2EDuration="2m57.416908426s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.409744994 +0000 UTC m=+236.883065517" watchObservedRunningTime="2026-02-26 14:17:38.416908426 +0000 UTC m=+236.890228949" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.449393 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" event={"ID":"fbade11b-78dc-4961-8b28-3d1493bab84c","Type":"ContainerStarted","Data":"480d8684e585f237cb2b0c1a969e97c5b8d38d470543c6a85a7a1d8b21d1fa87"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.450280 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.458712 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" podStartSLOduration=178.458692647 podStartE2EDuration="2m58.458692647s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.458504962 +0000 UTC m=+236.931825485" watchObservedRunningTime="2026-02-26 14:17:38.458692647 +0000 UTC m=+236.932013170" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.460318 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.460358 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.474215 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" event={"ID":"36eaa772-a53b-4c58-9e98-fb438b1fdee4","Type":"ContainerStarted","Data":"e41309e5327cd57d157a960d01d5fb922fc98bc6e8132b2363316215eca30ea4"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.496870 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" podStartSLOduration=177.496847022 podStartE2EDuration="2m57.496847022s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.495931155 +0000 UTC m=+236.969251688" watchObservedRunningTime="2026-02-26 14:17:38.496847022 +0000 UTC m=+236.970167545" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.496921 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:38 crc kubenswrapper[4809]: 
E0226 14:17:38.498348 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:38.998324595 +0000 UTC m=+237.471645138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.504926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" event={"ID":"4611e2a1-2842-4901-b49b-126b928b38f1","Type":"ContainerStarted","Data":"939a954417be136ae49aec9a088fe124b89c90464bd2a803d25004414f5af299"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.561939 4809 generic.go:334] "Generic (PLEG): container finished" podID="b9d62ac5-d483-4086-be8e-e1b7a784701c" containerID="a62b76f71a7c47adce1b29daf39573caadb4dfe783c8e450459230ae1e919fda" exitCode=0 Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.562155 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" event={"ID":"b9d62ac5-d483-4086-be8e-e1b7a784701c","Type":"ContainerDied","Data":"a62b76f71a7c47adce1b29daf39573caadb4dfe783c8e450459230ae1e919fda"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.582986 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" event={"ID":"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4","Type":"ContainerStarted","Data":"45b337c2ce6dae3eb4c5d76cb0d365deb83d9e4ab7cc7e637b23af09afd520c8"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.591967 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podStartSLOduration=177.591944095 podStartE2EDuration="2m57.591944095s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.591415969 +0000 UTC m=+237.064736512" watchObservedRunningTime="2026-02-26 14:17:38.591944095 +0000 UTC m=+237.065264628" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.594498 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podStartSLOduration=178.59448773 podStartE2EDuration="2m58.59448773s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.541098266 +0000 UTC m=+237.014418789" watchObservedRunningTime="2026-02-26 14:17:38.59448773 +0000 UTC m=+237.067808253" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.603264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.604811 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.104796834 +0000 UTC m=+237.578117357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.631675 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p47ss" podStartSLOduration=177.631652285 podStartE2EDuration="2m57.631652285s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.630404728 +0000 UTC m=+237.103725251" watchObservedRunningTime="2026-02-26 14:17:38.631652285 +0000 UTC m=+237.104972798" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.633404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" event={"ID":"0cf0a064-9313-441b-9ab2-19a3b64ec281","Type":"ContainerStarted","Data":"96b209cd20c494dd55cb38621fa81fc4f8b8310ed2cb3f638ee56778e8635edb"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.686869 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.689128 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.689165 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.707526 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.709045 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 14:17:39.209004915 +0000 UTC m=+237.682325438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.722824 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" podStartSLOduration=177.722805422 podStartE2EDuration="2m57.722805422s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.721328659 +0000 UTC m=+237.194649192" watchObservedRunningTime="2026-02-26 14:17:38.722805422 +0000 UTC m=+237.196125945" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.725828 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" event={"ID":"85de89d6-b550-49ac-b2e6-ec83ae54cac8","Type":"ContainerStarted","Data":"6350c7447089f3a93e5c4f82c2c8ae8ecf62fa25a8e4f64fa0aef96ed1fafc69"} Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.726571 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.726612 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.809789 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.825479 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.325458738 +0000 UTC m=+237.798779261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.912124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.912323 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.412282387 +0000 UTC m=+237.885602910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:38 crc kubenswrapper[4809]: I0226 14:17:38.912767 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:38 crc kubenswrapper[4809]: E0226 14:17:38.913093 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.41307723 +0000 UTC m=+237.886397753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.013732 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.014222 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.514200401 +0000 UTC m=+237.987520924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.115242 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.115814 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.615787145 +0000 UTC m=+238.089107918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.216026 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.216429 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.716406241 +0000 UTC m=+238.189726764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.216860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.217271 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.717263506 +0000 UTC m=+238.190584029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.318291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.318737 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.818712406 +0000 UTC m=+238.292032929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.420146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.420517 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:39.920499647 +0000 UTC m=+238.393820170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.520887 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.521272 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:40.021252826 +0000 UTC m=+238.494573349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.646404 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.646804 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:40.146784287 +0000 UTC m=+238.620104810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.683888 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:39 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:39 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:39 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.683945 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.732484 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" event={"ID":"a71f5fc0-296c-47c7-ae8b-63cddaa00c27","Type":"ContainerStarted","Data":"05d9a95192325ee146a2b0d60901c75d880402428e3b85a5603c015407242a25"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.733482 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.734979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" event={"ID":"fbade11b-78dc-4961-8b28-3d1493bab84c","Type":"ContainerStarted","Data":"ebc26281c0d9477b6e6de99bba0bf23da1fad5f7e71ddebc5b5b43b1794d2556"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.735829 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.735862 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.739641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" event={"ID":"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f","Type":"ContainerStarted","Data":"7e850119debc8ad35609b5d21bd4aef5b626fa1750a33f4f81a970e16f691e35"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.739675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" 
event={"ID":"f2bb0d0e-f2f7-4cd6-80d7-1361b474874f","Type":"ContainerStarted","Data":"4febf1288e83d107987c86d49fd8ac40660597645121dd85f502c440b0daef16"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.747266 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.747600 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:40.247581156 +0000 UTC m=+238.720901679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.757961 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc4c2ace-f831-4413-b703-522b24da3a71" containerID="922b0cd06b07abb44f0268e616d5b2f854c8690898125d6e47c32003d9aa177c" exitCode=0 Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.758053 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerDied","Data":"922b0cd06b07abb44f0268e616d5b2f854c8690898125d6e47c32003d9aa177c"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.758079 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerStarted","Data":"2f9e19bbdea4703579a451ff399b2af13b69ddcd0374ca261dfb05f14ac244d2"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.758635 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.761912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" event={"ID":"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4","Type":"ContainerStarted","Data":"4059177222f3c679dea14169b798e0eb9c78b5c16851087fd93402118418704b"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.763509 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.763627 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.763657 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.772273 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cvtbt" event={"ID":"2917e129-ff3e-417c-86f3-0625613663de","Type":"ContainerStarted","Data":"a909133f8f6228bd9028db7e58bc279d1b6a3488b9a129be155f145bf4278c7f"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.775285 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7f2nt" podStartSLOduration=179.775273372 podStartE2EDuration="2m59.775273372s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:38.770752825 +0000 UTC m=+237.244073348" watchObservedRunningTime="2026-02-26 14:17:39.775273372 +0000 UTC m=+238.248593895" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.776245 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" podStartSLOduration=178.776240371 podStartE2EDuration="2m58.776240371s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:39.77180358 +0000 UTC m=+238.245124103" watchObservedRunningTime="2026-02-26 14:17:39.776240371 +0000 UTC m=+238.249560894" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.782766 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" event={"ID":"49550fbb-c382-4ee2-9f93-fb53816fb1c7","Type":"ContainerStarted","Data":"abd307f01fe7f0cd5d768f2d693b973cae15427739b4c81c009ba3e7a2d661ce"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.783134 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.785066 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qpw8j container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.785112 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" podUID="49550fbb-c382-4ee2-9f93-fb53816fb1c7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.789146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" event={"ID":"420b577e-f310-4cc8-bc79-a2abcb837bbe","Type":"ContainerStarted","Data":"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988"} Feb 26 14:17:39 crc kubenswrapper[4809]: 
I0226 14:17:39.789858 4809 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rs49n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.789932 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.798837 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mxjxl" event={"ID":"6a66e5a6-e6ca-4258-97c0-c9bb502b9b4b","Type":"ContainerStarted","Data":"c7808b9eb811e2501da6ad6e63cda5e3c73f931cad43425a676da7a0e6efd1d3"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.799039 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.801150 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podStartSLOduration=178.801120524 podStartE2EDuration="2m58.801120524s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:39.799501607 +0000 UTC m=+238.272822130" watchObservedRunningTime="2026-02-26 14:17:39.801120524 +0000 UTC m=+238.274441057" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.806175 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-l65jj" event={"ID":"ae850862-0d3f-4f13-b723-0a0a66d1bda7","Type":"ContainerStarted","Data":"7fe02ce9a3590f94eb28da8afb9787c3998bdba187666643bd34238d37e8abae"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.815997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" event={"ID":"b9d62ac5-d483-4086-be8e-e1b7a784701c","Type":"ContainerStarted","Data":"c8eb1a0e726d62b78e09916f029b69783cfeb9e0fe140921a9604587c9736ff7"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.822745 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gpx9n" event={"ID":"0cf0a064-9313-441b-9ab2-19a3b64ec281","Type":"ContainerStarted","Data":"4c116e3046a61471d3d9137e3016389335630a71edcecff7865977f868fafde2"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.823971 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8mq5j" podStartSLOduration=178.823945977 podStartE2EDuration="2m58.823945977s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:39.822417572 +0000 UTC m=+238.295738095" watchObservedRunningTime="2026-02-26 14:17:39.823945977 +0000 UTC m=+238.297266500" Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.832537 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-nq6xq" event={"ID":"2cfda986-0167-488b-b585-5212627c9f28","Type":"ContainerStarted","Data":"a3f891b25296eaead67918b256ad5ad20c397e266996d30b3802682b5b76af5a"} Feb 26 14:17:39 crc kubenswrapper[4809]: I0226 14:17:39.849211 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:39 crc kubenswrapper[4809]: E0226 14:17:39.850826 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:40.350807619 +0000 UTC m=+238.824128142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.525048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:40 crc kubenswrapper[4809]: E0226 14:17:40.525617 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.525588933 +0000 UTC m=+239.998909456 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.561171 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podStartSLOduration=180.561148175 podStartE2EDuration="3m0.561148175s" podCreationTimestamp="2026-02-26 14:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.560844166 +0000 UTC m=+239.034164689" watchObservedRunningTime="2026-02-26 14:17:40.561148175 +0000 UTC m=+239.034468708" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.580558 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" event={"ID":"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea","Type":"ContainerStarted","Data":"8e403f20ae2ae1146674a38c6ae232bef71ff95b43464da137bf81ad26fa211e"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.580625 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" event={"ID":"3a6d0c64-c5d9-4bef-a8e0-5a82ee950dea","Type":"ContainerStarted","Data":"0f8fa94f68088ee652813d8eb48602ceb99bce597ace08874da214e2c6825ab2"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.581298 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" event={"ID":"52765744-e1b6-4600-8037-d144a9dc61ab","Type":"ContainerStarted","Data":"b5d81d6acb3c4adfab77eb75c85338fcc65ef029ed5c679e86a10a57035e3a7c"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.581324 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" event={"ID":"52765744-e1b6-4600-8037-d144a9dc61ab","Type":"ContainerStarted","Data":"e5ea1e0ce56950a588c566e43417e8b59c205fa57f4c079b09217194e63ab305"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.605789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" event={"ID":"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1","Type":"ContainerStarted","Data":"a7314f8e958f6d3d19c4228b8311455a6cb0a96688a26c89724be0b0f68878e2"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.605853 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" event={"ID":"6e67602b-e831-4e88-8f32-e6fa8e2a9ab1","Type":"ContainerStarted","Data":"55b313b9771f7926a4283d5a906811ae13f652518c621253422c4fa40e36ff46"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.617607 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hnnzr" event={"ID":"d0be611c-e33e-479e-ba69-f2c1ee615b74","Type":"ContainerStarted","Data":"55fff2a03368ae4a0a8d7b5514df25a7bf70fe04d6700a716d478c39f7642246"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.626651 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:40 crc kubenswrapper[4809]: E0226 14:17:40.627953 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.127933525 +0000 UTC m=+239.601254048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.635375 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" event={"ID":"27674e3b-1fb9-4e3a-83d9-2b77ccd40571","Type":"ContainerStarted","Data":"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.636271 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mxjxl" podStartSLOduration=9.636242999 podStartE2EDuration="9.636242999s" podCreationTimestamp="2026-02-26 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.599458605 +0000 UTC m=+239.072779128" watchObservedRunningTime="2026-02-26 14:17:40.636242999 +0000 UTC m=+239.109563532" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.636705 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" podStartSLOduration=179.636698193 podStartE2EDuration="2m59.636698193s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.624911096 +0000 UTC m=+239.098231629" watchObservedRunningTime="2026-02-26 14:17:40.636698193 +0000 UTC m=+239.110018716" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.636803 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.640661 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v58kd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.640718 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial 
tcp 10.217.0.28:8080: connect: connection refused" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.646908 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" event={"ID":"c2048c3a-d91f-4ef5-93e1-41a621001c94","Type":"ContainerStarted","Data":"062d1e05019b76e8d1a4c7213d80a62c8cb63f00bd7d68b6a5d7f53899958740"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.651329 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" event={"ID":"47b66962-ad36-46e0-be47-cfc6cdd6bcc0","Type":"ContainerStarted","Data":"97c6f97eabcb7fa24ccd9873c590f9f48838984f53d2d093ad639376fccc4496"} Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.665726 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.665791 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.666751 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xfdk4" podStartSLOduration=179.666718838 podStartE2EDuration="2m59.666718838s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.655497457 +0000 UTC m=+239.128817980" watchObservedRunningTime="2026-02-26 14:17:40.666718838 +0000 UTC m=+239.140039361" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.680586 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:40 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:40 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:40 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.680738 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.710381 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" podStartSLOduration=179.710337843 podStartE2EDuration="2m59.710337843s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.689507789 +0000 UTC m=+239.162828312" watchObservedRunningTime="2026-02-26 14:17:40.710337843 +0000 UTC m=+239.183658576" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 
14:17:40.712645 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5cv4q" podStartSLOduration=179.712634161 podStartE2EDuration="2m59.712634161s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.704371868 +0000 UTC m=+239.177692421" watchObservedRunningTime="2026-02-26 14:17:40.712634161 +0000 UTC m=+239.185954684" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.728186 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn62q" podStartSLOduration=179.728168049 podStartE2EDuration="2m59.728168049s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.726994664 +0000 UTC m=+239.200315187" watchObservedRunningTime="2026-02-26 14:17:40.728168049 +0000 UTC m=+239.201488572" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.735771 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:40 crc kubenswrapper[4809]: E0226 14:17:40.738958 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.238928356 +0000 UTC m=+239.712248879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.765281 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vm52c" podStartSLOduration=179.765260452 podStartE2EDuration="2m59.765260452s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.755637289 +0000 UTC m=+239.228957812" watchObservedRunningTime="2026-02-26 14:17:40.765260452 +0000 UTC m=+239.238580975" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.800143 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" podStartSLOduration=179.800111599 podStartE2EDuration="2m59.800111599s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:40.782770198 +0000 UTC m=+239.256090721" watchObservedRunningTime="2026-02-26 14:17:40.800111599 +0000 UTC m=+239.273432122" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.829983 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54372: no serving certificate available for the kubelet" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.836743 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:40 crc kubenswrapper[4809]: E0226 14:17:40.838204 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.338178862 +0000 UTC m=+239.811499395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.932140 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54376: no serving certificate available for the kubelet" Feb 26 14:17:40 crc kubenswrapper[4809]: I0226 14:17:40.939234 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:40 crc kubenswrapper[4809]: E0226 14:17:40.939636 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.439622212 +0000 UTC m=+239.912942725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.040846 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.041082 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.541040591 +0000 UTC m=+240.014361114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.041132 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.041602 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.541593597 +0000 UTC m=+240.014914120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.044540 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54386: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.079541 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54400: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.140050 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54416: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.142549 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.142901 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.642870952 +0000 UTC m=+240.116191475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.142944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.143476 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.64345937 +0000 UTC m=+240.116780073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.233575 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54420: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.244158 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.244365 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.744326703 +0000 UTC m=+240.217647226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.244466 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.244968 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.744947971 +0000 UTC m=+240.218268494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.330632 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54426: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.345879 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.346095 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.846065251 +0000 UTC m=+240.319385774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.346384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.346757 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.846740861 +0000 UTC m=+240.320061384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.447779 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.448144 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:41.9481255 +0000 UTC m=+240.421446023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.549393 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.549757 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.049739405 +0000 UTC m=+240.523059928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.650181 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.650406 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.150370151 +0000 UTC m=+240.623690704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.650619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.650991 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.150950448 +0000 UTC m=+240.624271151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.657643 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" event={"ID":"0b74b8aa-c615-4cbe-a08f-2781174e2596","Type":"ContainerStarted","Data":"7d6d9411e8a10d76f003c48050b0de85977c3213c8dd26709c9490668259045a"} Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.658724 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v58kd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.658805 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.660301 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.660356 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: 
connection refused" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.668190 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.677603 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.704625 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:41 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:41 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:41 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.704682 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.706989 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54432: no serving certificate available for the kubelet" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.751965 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.752205 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.252171491 +0000 UTC m=+240.725492014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.752979 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.757996 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.257976032 +0000 UTC m=+240.731296705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.796603 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.796674 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.859129 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.859517 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.359496055 +0000 UTC m=+240.832816578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.900903 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" Feb 26 14:17:41 crc kubenswrapper[4809]: I0226 14:17:41.960428 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:41 crc kubenswrapper[4809]: E0226 14:17:41.960824 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.460810901 +0000 UTC m=+240.934131424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.061280 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.061614 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.561596782 +0000 UTC m=+241.034917305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.163379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.163816 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.663796304 +0000 UTC m=+241.137116827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.253280 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.254549 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.263326 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.264513 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.265925 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.266969 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.766932754 +0000 UTC m=+241.240253277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.286485 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.369960 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.370060 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.370101 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.370523 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.870505167 +0000 UTC m=+241.343825690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.393062 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.471395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.471551 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.471671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.471816 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:42.971796252 +0000 UTC m=+241.445116775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.471854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.512322 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54438: no serving certificate available for the kubelet" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.573062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.573400 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.073387167 +0000 UTC m=+241.546707690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.573525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.611272 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.668879 4809 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rs49n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.668931 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.673991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.675003 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.174980711 +0000 UTC m=+241.648301234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.675365 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:42 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:42 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:42 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.675399 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.775509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.777098 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.277085741 +0000 UTC m=+241.750406264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.871603 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"] Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.871993 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" containerID="cri-o://037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156" gracePeriod=30 Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.876933 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.877151 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.377108659 +0000 UTC m=+241.850429182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.877406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.877891 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.377871701 +0000 UTC m=+241.851192224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.936761 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"] Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.936996 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerName="route-controller-manager" containerID="cri-o://336983f357ca66f38878973ca5b297d225544cd2b0a3a733cc2de4976aad7f7e" gracePeriod=30 Feb 26 14:17:42 crc kubenswrapper[4809]: I0226 14:17:42.978424 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:42 crc kubenswrapper[4809]: E0226 14:17:42.978856 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.478837387 +0000 UTC m=+241.952157910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.081051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.081726 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.581712749 +0000 UTC m=+242.055033272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.182820 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.183290 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.683268773 +0000 UTC m=+242.156589296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.190886 4809 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pzw5h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.190968 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.200624 4809 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4vxzc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]log ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]etcd ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/max-in-flight-filter ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 14:17:43 crc kubenswrapper[4809]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 14:17:43 crc kubenswrapper[4809]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/project.openshift.io-projectcache ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/openshift.io-startinformers ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 14:17:43 crc kubenswrapper[4809]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 14:17:43 crc kubenswrapper[4809]: livez check failed Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.200710 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" podUID="3d875f76-8d31-46f5-9fcc-20d2868e7c2f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.232165 4809 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vhtz4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.232254 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.262163 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.263544 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.264945 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.271798 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.278958 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.284898 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.285278 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.785262049 +0000 UTC m=+242.258582582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.288547 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.386526 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.387194 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.887137571 +0000 UTC m=+242.360458094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.396261 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4d2\" (UniqueName: \"kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.396640 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.396934 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.397107 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " 
pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.397809 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.897792955 +0000 UTC m=+242.371113468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.467678 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.475754 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.492098 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.498048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.498363 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4d2\" (UniqueName: \"kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.498460 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.498521 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.498944 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.499110 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:43.999001538 +0000 UTC m=+242.472322061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.499226 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.499676 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.546933 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4d2\" (UniqueName: \"kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2\") pod \"community-operators-45bqj\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.597301 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.599866 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjmn\" (UniqueName: \"kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.599949 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.600364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.600432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.600802 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.100783928 +0000 UTC m=+242.574104451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.613863 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.624838 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.625192 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.625209 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.625340 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerName="controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.626306 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.644038 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.680039 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:43 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:43 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:43 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.680112 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.690661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e0ceefd-980c-412b-a75b-ea8ba1c95a19","Type":"ContainerStarted","Data":"322d4030110cbb226560c0f4ac2212197e33351e0423ab6ed2f4e521ee9e38b6"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.694035 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerID="336983f357ca66f38878973ca5b297d225544cd2b0a3a733cc2de4976aad7f7e" exitCode=0 Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.694127 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" event={"ID":"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2","Type":"ContainerDied","Data":"336983f357ca66f38878973ca5b297d225544cd2b0a3a733cc2de4976aad7f7e"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.694154 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" event={"ID":"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2","Type":"ContainerDied","Data":"b15b9f77fe9c148b59aff8affa7b56be2511123974f2b2e243bdca416d1ace33"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.694166 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15b9f77fe9c148b59aff8affa7b56be2511123974f2b2e243bdca416d1ace33" Feb 
26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.697926 4809 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701343 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config\") pod \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701436 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4zjx\" (UniqueName: \"kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx\") pod \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701487 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert\") pod \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701582 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca\") pod \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701632 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles\") pod \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\" (UID: \"1a54ce01-6b5f-4e57-8069-e5380a6e153f\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701757 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.701978 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpjmn\" (UniqueName: \"kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.702035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwn2\" (UniqueName: \"kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.702074 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " 
pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.702161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.702195 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.702223 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.703465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config" (OuterVolumeSpecName: "config") pod "1a54ce01-6b5f-4e57-8069-e5380a6e153f" (UID: "1a54ce01-6b5f-4e57-8069-e5380a6e153f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.704951 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.204926698 +0000 UTC m=+242.678247221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.708178 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.708272 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.708503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" event={"ID":"0b74b8aa-c615-4cbe-a08f-2781174e2596","Type":"ContainerStarted","Data":"285e902be34002cdde7ac8b27bfe4cf97fd46806d0c15d45a7c1b9d454f8dc22"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.708550 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" event={"ID":"0b74b8aa-c615-4cbe-a08f-2781174e2596","Type":"ContainerStarted","Data":"d835ab406b5b55693eb42fd55a4943cbf1ff2f9897508d48c46c8a6b842421ca"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.708927 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a54ce01-6b5f-4e57-8069-e5380a6e153f" (UID: "1a54ce01-6b5f-4e57-8069-e5380a6e153f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.713837 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a54ce01-6b5f-4e57-8069-e5380a6e153f" (UID: "1a54ce01-6b5f-4e57-8069-e5380a6e153f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.728153 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a54ce01-6b5f-4e57-8069-e5380a6e153f" (UID: "1a54ce01-6b5f-4e57-8069-e5380a6e153f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.731217 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx" (OuterVolumeSpecName: "kube-api-access-h4zjx") pod "1a54ce01-6b5f-4e57-8069-e5380a6e153f" (UID: "1a54ce01-6b5f-4e57-8069-e5380a6e153f"). 
InnerVolumeSpecName "kube-api-access-h4zjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.733313 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.734331 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpjmn\" (UniqueName: \"kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn\") pod \"certified-operators-kdrnc\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.737830 4809 generic.go:334] "Generic (PLEG): container finished" podID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" containerID="037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156" exitCode=0 Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.737978 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.739305 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" event={"ID":"1a54ce01-6b5f-4e57-8069-e5380a6e153f","Type":"ContainerDied","Data":"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.739355 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vhtz4" event={"ID":"1a54ce01-6b5f-4e57-8069-e5380a6e153f","Type":"ContainerDied","Data":"0cd0cb086ff0cde39867104fdb651bba796e87492bab236622b9c8f5b0a5b9cb"} Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.739382 4809 scope.go:117] "RemoveContainer" containerID="037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.790025 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.799918 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vhtz4"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.802980 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config\") pod \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803157 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca\") pod \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803205 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfckt\" (UniqueName: \"kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt\") pod \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803497 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert\") pod \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\" (UID: \"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803737 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.803848 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chwn2\" (UniqueName: \"kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804620 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804640 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4zjx\" (UniqueName: \"kubernetes.io/projected/1a54ce01-6b5f-4e57-8069-e5380a6e153f-kube-api-access-h4zjx\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804656 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a54ce01-6b5f-4e57-8069-e5380a6e153f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804668 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.804680 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a54ce01-6b5f-4e57-8069-e5380a6e153f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.806433 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config" (OuterVolumeSpecName: "config") pod "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" (UID: 
"ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.807029 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" (UID: "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.810103 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.810434 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.813599 4809 scope.go:117] "RemoveContainer" containerID="037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.814257 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.314233619 +0000 UTC m=+242.787554142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.814823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt" (OuterVolumeSpecName: "kube-api-access-pfckt") pod "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" (UID: "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2"). InnerVolumeSpecName "kube-api-access-pfckt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.814887 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" (UID: "ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.816378 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156\": container with ID starting with 037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156 not found: ID does not exist" containerID="037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.816433 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156"} err="failed to get container status \"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156\": rpc error: code = NotFound desc = could not find container \"037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156\": container with ID starting with 037a95a86a5fb30c35b6374dcd371f2a9be02eb319fcb3eda3bae83e2283e156 not found: ID does not exist" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.820889 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.836196 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.836486 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerName="route-controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.836502 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerName="route-controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.836636 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" containerName="route-controller-manager" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.845970 4809 ???:1] "http: TLS handshake error from 192.168.126.11:54444: no serving certificate available for the kubelet" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.847550 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.849118 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chwn2\" (UniqueName: \"kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2\") pod \"community-operators-v9hcf\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.853891 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.906692 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907505 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkfc\" (UniqueName: \"kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:43 crc kubenswrapper[4809]: E0226 14:17:43.907604 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.407575731 +0000 UTC m=+242.880896254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907664 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907751 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907798 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfckt\" (UniqueName: \"kubernetes.io/projected/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-kube-api-access-pfckt\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907813 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907831 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.907842 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.964074 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.964134 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.964428 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 26 14:17:43 crc kubenswrapper[4809]: I0226 14:17:43.964444 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" 
podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.010620 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.012277 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.012350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.012383 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkfc\" (UniqueName: \"kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.012438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.013120 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.013460 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.513439311 +0000 UTC m=+242.986759824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.019175 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.043097 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkfc\" (UniqueName: \"kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc\") pod \"certified-operators-7k8zw\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.069365 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.109910 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.111663 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.112843 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.114223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.114455 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.614412917 +0000 UTC m=+243.087733440 (durationBeforeRetry 500ms). 
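Each of these failed volume operations is parked by the kubelet's nestedpendingoperations bookkeeping: the entry records the failure and a durationBeforeRetry of 500ms before another attempt is permitted for that volume. Below is a minimal sketch of such a per-volume retry gate, illustrative only; the type and function names are assumptions, the fixed 500ms delay is taken from these entries, and the kubelet's actual backoff policy is not reproduced here.

```go
package main

import (
	"fmt"
	"time"
)

// retryGate is an illustrative stand-in for the per-volume backoff the log
// reports as "No retries permitted until ... (durationBeforeRetry 500ms)".
// It is not the kubelet implementation; names and the fixed delay are
// assumptions for this sketch.
type retryGate struct {
	delay time.Duration
	next  map[string]time.Time // volume name -> earliest permitted retry
}

func newRetryGate(delay time.Duration) *retryGate {
	return &retryGate{delay: delay, next: make(map[string]time.Time)}
}

// Allowed reports whether an operation on key may run at time now.
func (g *retryGate) Allowed(key string, now time.Time) bool {
	return !now.Before(g.next[key])
}

// Failed records a failed attempt and parks the key for the configured delay.
func (g *retryGate) Failed(key string, now time.Time) {
	g.next[key] = now.Add(g.delay)
}

func main() {
	g := newRetryGate(500 * time.Millisecond)
	vol := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	now := time.Now()

	g.Failed(vol, now) // e.g. MountDevice failed: driver not registered yet
	fmt.Println("retry now:        ", g.Allowed(vol, now))
	fmt.Println("retry after 500ms:", g.Allowed(vol, now.Add(500*time.Millisecond)))
}
```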
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.114534 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.114925 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.614909852 +0000 UTC m=+243.088230375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.116633 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.118472 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.197411 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.197904 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.197948 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.211824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.216294 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.217440 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.717422743 +0000 UTC m=+243.190743266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.217485 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.217615 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.217660 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.218004 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.71799635 +0000 UTC m=+243.191316873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.275455 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a54ce01-6b5f-4e57-8069-e5380a6e153f" path="/var/lib/kubelet/pods/1a54ce01-6b5f-4e57-8069-e5380a6e153f/volumes" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.276404 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.276430 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.278052 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.288280 4809 patch_prober.go:28] interesting pod/console-f9d7485db-c2d27 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.288356 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c2d27" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.321273 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.321526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.321598 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.322738 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.822714967 +0000 UTC m=+243.296035490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.325271 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.335551 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.351814 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: W0226 14:17:44.360622 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa922741_f315_4b16_af68_7d3a26d8604c.slice/crio-a1612a6f3ea323145f53a4586530331568f9ccaabe0d46ded9f1d965b916ea8f WatchSource:0}: Error finding container a1612a6f3ea323145f53a4586530331568f9ccaabe0d46ded9f1d965b916ea8f: Status 404 returned error can't find the container with id a1612a6f3ea323145f53a4586530331568f9ccaabe0d46ded9f1d965b916ea8f Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.424556 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.425003 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:44.924978801 +0000 UTC m=+243.398299324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.468918 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.474341 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:44 crc kubenswrapper[4809]: W0226 14:17:44.518106 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52e94bad_9e41_4386_9746_264b0fa96b35.slice/crio-f9bc9b1723a77db9ce73d0c6345cb3788fe9e3ef59d9575d956e2b0854dad5b4 WatchSource:0}: Error finding container f9bc9b1723a77db9ce73d0c6345cb3788fe9e3ef59d9575d956e2b0854dad5b4: Status 404 returned error can't find the container with id f9bc9b1723a77db9ce73d0c6345cb3788fe9e3ef59d9575d956e2b0854dad5b4 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.526416 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.526724 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 14:17:45.026697549 +0000 UTC m=+243.500018082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.527815 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: E0226 14:17:44.528232 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 14:17:45.028219764 +0000 UTC m=+243.501540287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hr5qh" (UID: "911a7065-8744-4237-a986-118263d49bb0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.543652 4809 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T14:17:43.698369084Z","Handler":null,"Name":""} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.555157 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.557847 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.558280 4809 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.558321 4809 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.561180 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.564403 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.565202 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.565379 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.567961 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.568164 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.568426 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.577056 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.577262 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.579391 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.582972 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.586850 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.629786 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630301 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630575 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630629 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630703 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmgj\" (UniqueName: \"kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc 
kubenswrapper[4809]: I0226 14:17:44.630857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.630958 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm57k\" (UniqueName: \"kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.631788 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.634203 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.674444 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.682338 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:44 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:44 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:44 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.682396 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.733659 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734047 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734110 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm57k\" (UniqueName: \"kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734155 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: 
\"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734289 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734403 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmgj\" (UniqueName: \"kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.734916 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.742822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.743898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.744255 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.745128 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.754168 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmgj\" (UniqueName: \"kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.757113 4809 generic.go:334] "Generic (PLEG): container finished" podID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerID="473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3" exitCode=0 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.757193 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.757235 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.757433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerDied","Data":"473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.758120 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerStarted","Data":"1d3cc1c88c1b82e4ecc40d135000de399fd816a02a9d7088f79505325006c20b"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.757857 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.782984 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert\") pod \"controller-manager-845574b8bd-mwjs9\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.800417 4809 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.803940 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm57k\" (UniqueName: \"kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k\") pod \"route-controller-manager-7b5dd9989c-lw54d\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.829813 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa922741-f315-4b16-af68-7d3a26d8604c" containerID="ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280" exitCode=0 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.830350 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerDied","Data":"ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.830411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerStarted","Data":"a1612a6f3ea323145f53a4586530331568f9ccaabe0d46ded9f1d965b916ea8f"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.839113 4809 generic.go:334] "Generic (PLEG): container finished" podID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerID="d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9" exitCode=0 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.839179 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerDied","Data":"d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.839207 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerStarted","Data":"3880eff705a607e223d9160d08d7c2739df3920d026c3ed2dd79a283f3f2fee4"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.856371 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hr5qh\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.860097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerStarted","Data":"f9bc9b1723a77db9ce73d0c6345cb3788fe9e3ef59d9575d956e2b0854dad5b4"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.867910 4809 generic.go:334] "Generic (PLEG): container finished" podID="7e0ceefd-980c-412b-a75b-ea8ba1c95a19" containerID="e960ed4a47f17d9dca3d81d7a80cd241bc914ccf386ff5ff79162557e2194e55" exitCode=0 Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.867974 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e0ceefd-980c-412b-a75b-ea8ba1c95a19","Type":"ContainerDied","Data":"e960ed4a47f17d9dca3d81d7a80cd241bc914ccf386ff5ff79162557e2194e55"} Feb 26 14:17:44 crc 
kubenswrapper[4809]: I0226 14:17:44.874315 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" event={"ID":"0b74b8aa-c615-4cbe-a08f-2781174e2596","Type":"ContainerStarted","Data":"3186a9fa1ac6168ae14aa4196ea8b53596cd4724fdea5f44daeb21a5a265f4a4"} Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.877501 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.904851 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.904968 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.922868 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" podStartSLOduration=13.922851036 podStartE2EDuration="13.922851036s" podCreationTimestamp="2026-02-26 14:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:44.9195889 +0000 UTC m=+243.392909423" watchObservedRunningTime="2026-02-26 14:17:44.922851036 +0000 UTC m=+243.396171559" Feb 26 14:17:44 crc kubenswrapper[4809]: I0226 14:17:44.948113 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.006947 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.027258 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzw5h"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.058191 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.069798 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.107635 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.115118 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.295440 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.410090 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.425554 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.427083 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.430368 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.438639 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:17:45 crc kubenswrapper[4809]: W0226 14:17:45.449044 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911a7065_8744_4237_a986_118263d49bb0.slice/crio-721b4888ab01ec44e19c5a26969536a6b8739ac4e068c10497822a7056c970bf WatchSource:0}: Error finding container 721b4888ab01ec44e19c5a26969536a6b8739ac4e068c10497822a7056c970bf: Status 404 returned error can't find the container with id 721b4888ab01ec44e19c5a26969536a6b8739ac4e068c10497822a7056c970bf Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.468125 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.468180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.468345 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66sjq\" (UniqueName: \"kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.473994 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:17:45 crc kubenswrapper[4809]: W0226 14:17:45.480354 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a7d90b_dbeb_4f0d_b576_21100d495d15.slice/crio-9ccd663ef28be8621591653a59d3de8825c7370b891103467eeec4f3c2686c35 WatchSource:0}: Error finding container 
9ccd663ef28be8621591653a59d3de8825c7370b891103467eeec4f3c2686c35: Status 404 returned error can't find the container with id 9ccd663ef28be8621591653a59d3de8825c7370b891103467eeec4f3c2686c35 Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.584439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66sjq\" (UniqueName: \"kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.584938 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.584975 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.585521 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.585592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.605079 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66sjq\" (UniqueName: \"kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq\") pod \"redhat-marketplace-r2kqz\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.675862 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:45 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:45 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:45 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.675930 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.746811 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.856058 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.857826 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.875273 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.904404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" event={"ID":"6209aac2-e87a-4569-9a7c-81db7b662e7a","Type":"ContainerStarted","Data":"a0a26c5499f1ef31a8c5db29fa1390c602919453293f51b5430efb744f628cd3"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.904470 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" event={"ID":"6209aac2-e87a-4569-9a7c-81db7b662e7a","Type":"ContainerStarted","Data":"d05dbe3d95efe4d778e64c34bb9bb68e09be282629b74f4b4bcaf2ed1c6d41dd"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.905493 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.934726 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" event={"ID":"96a7d90b-dbeb-4f0d-b576-21100d495d15","Type":"ContainerStarted","Data":"3fbdd218e042236f0ae25487a4060de553f14dc0541796f427c13ddc62a4f91a"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.934789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" event={"ID":"96a7d90b-dbeb-4f0d-b576-21100d495d15","Type":"ContainerStarted","Data":"9ccd663ef28be8621591653a59d3de8825c7370b891103467eeec4f3c2686c35"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.935808 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.937261 4809 patch_prober.go:28] interesting pod/route-controller-manager-7b5dd9989c-lw54d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.937551 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.950717 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" podStartSLOduration=2.9506922209999997 podStartE2EDuration="2.950692221s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:45.949272639 +0000 UTC m=+244.422593172" watchObservedRunningTime="2026-02-26 14:17:45.950692221 +0000 UTC m=+244.424012754" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.984629 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" podStartSLOduration=2.984607501 podStartE2EDuration="2.984607501s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:45.979566902 +0000 UTC m=+244.452887415" watchObservedRunningTime="2026-02-26 14:17:45.984607501 +0000 UTC m=+244.457928024" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.991401 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86","Type":"ContainerStarted","Data":"0864617d800c335650de789b24fbc240f8427b5d044e1228baf3e82960f7ca88"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.991455 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86","Type":"ContainerStarted","Data":"2c5dd9da0f6f58a7719f3fd6780700bbb9cd7dad59324914976c813ebeb2755c"} Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.992780 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.993326 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.993359 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:45 crc kubenswrapper[4809]: I0226 14:17:45.993431 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7797m\" (UniqueName: \"kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.008326 4809 generic.go:334] "Generic (PLEG): container finished" podID="52e94bad-9e41-4386-9746-264b0fa96b35" containerID="65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc" exitCode=0 Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.008403 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" 
event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerDied","Data":"65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc"} Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.017772 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.017755558 podStartE2EDuration="2.017755558s" podCreationTimestamp="2026-02-26 14:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:46.01748854 +0000 UTC m=+244.490809063" watchObservedRunningTime="2026-02-26 14:17:46.017755558 +0000 UTC m=+244.491076081" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.019632 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" event={"ID":"911a7065-8744-4237-a986-118263d49bb0","Type":"ContainerStarted","Data":"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be"} Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.019682 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.019692 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" event={"ID":"911a7065-8744-4237-a986-118263d49bb0","Type":"ContainerStarted","Data":"721b4888ab01ec44e19c5a26969536a6b8739ac4e068c10497822a7056c970bf"} Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.095135 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.095218 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.095367 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7797m\" (UniqueName: \"kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.095679 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.096515 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 
14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.139211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7797m\" (UniqueName: \"kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m\") pod \"redhat-marketplace-l8g9s\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.184412 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.188083 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" podStartSLOduration=185.188062208 podStartE2EDuration="3m5.188062208s" podCreationTimestamp="2026-02-26 14:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:17:46.100559259 +0000 UTC m=+244.573879802" watchObservedRunningTime="2026-02-26 14:17:46.188062208 +0000 UTC m=+244.661382731" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.188553 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:17:46 crc kubenswrapper[4809]: W0226 14:17:46.214195 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f47aa1_3b5e_4e70_b27f_88ff985a0104.slice/crio-0cf592e4b62bfb1b58602844a0bb9e785c15e45a5b70c97ef0c086a5e3d37851 WatchSource:0}: Error finding container 0cf592e4b62bfb1b58602844a0bb9e785c15e45a5b70c97ef0c086a5e3d37851: Status 404 returned error can't find the container with id 0cf592e4b62bfb1b58602844a0bb9e785c15e45a5b70c97ef0c086a5e3d37851 Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.266789 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.267449 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2" path="/var/lib/kubelet/pods/ccc7eec6-0afe-49cb-a6bc-d3689a3b34a2/volumes" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.373033 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.425417 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:17:46 crc kubenswrapper[4809]: E0226 14:17:46.425871 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0ceefd-980c-412b-a75b-ea8ba1c95a19" containerName="pruner" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.425897 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0ceefd-980c-412b-a75b-ea8ba1c95a19" containerName="pruner" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.426055 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0ceefd-980c-412b-a75b-ea8ba1c95a19" containerName="pruner" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.429549 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.437659 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.438322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.445094 4809 ???:1] "http: TLS handshake error from 192.168.126.11:44210: no serving certificate available for the kubelet" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.454287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:17:46 crc kubenswrapper[4809]: W0226 14:17:46.468396 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f7c55c_ff28_4d72_aa11_17908ebe8c26.slice/crio-7defb4ce7f5fd066da2281f25b9c55ce7455ceb0e26c874fef3f730083e454d6 WatchSource:0}: Error finding container 7defb4ce7f5fd066da2281f25b9c55ce7455ceb0e26c874fef3f730083e454d6: Status 404 returned error can't find the container with id 7defb4ce7f5fd066da2281f25b9c55ce7455ceb0e26c874fef3f730083e454d6 Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.501712 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir\") pod \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.501890 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access\") pod \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\" (UID: \"7e0ceefd-980c-412b-a75b-ea8ba1c95a19\") " Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.502355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.502387 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rzz\" (UniqueName: \"kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.502362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e0ceefd-980c-412b-a75b-ea8ba1c95a19" (UID: "7e0ceefd-980c-412b-a75b-ea8ba1c95a19"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.502602 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.509249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e0ceefd-980c-412b-a75b-ea8ba1c95a19" (UID: "7e0ceefd-980c-412b-a75b-ea8ba1c95a19"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.542368 4809 ???:1] "http: TLS handshake error from 192.168.126.11:44214: no serving certificate available for the kubelet" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.605098 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.605495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rzz\" (UniqueName: \"kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.605966 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.606251 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.606266 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0ceefd-980c-412b-a75b-ea8ba1c95a19-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.606766 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.633806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 
crc kubenswrapper[4809]: I0226 14:17:46.638286 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rzz\" (UniqueName: \"kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz\") pod \"redhat-operators-jwxvj\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.677756 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:46 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:46 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:46 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.677832 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.758639 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.821573 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.822665 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.838164 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.911474 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.911676 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:46 crc kubenswrapper[4809]: I0226 14:17:46.911705 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bs2\" (UniqueName: \"kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.001671 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:17:47 crc kubenswrapper[4809]: W0226 14:17:47.009806 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0cd2a65_4aaf_4322_8e24_ca1aa935c510.slice/crio-f2b2f3115ee6936ac1324047ebec1f4851149c3b933f448f644e45fde0e19cac WatchSource:0}: Error finding container f2b2f3115ee6936ac1324047ebec1f4851149c3b933f448f644e45fde0e19cac: Status 404 returned error can't find the container with id f2b2f3115ee6936ac1324047ebec1f4851149c3b933f448f644e45fde0e19cac Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.012446 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.012499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bs2\" (UniqueName: \"kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.012557 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.013222 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.015113 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.029698 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerStarted","Data":"0cf592e4b62bfb1b58602844a0bb9e785c15e45a5b70c97ef0c086a5e3d37851"} Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.031490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerStarted","Data":"7defb4ce7f5fd066da2281f25b9c55ce7455ceb0e26c874fef3f730083e454d6"} Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.031995 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bs2\" (UniqueName: \"kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2\") pod \"redhat-operators-wq8dn\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.032894 4809 generic.go:334] "Generic (PLEG): container finished" podID="c2048c3a-d91f-4ef5-93e1-41a621001c94" 
containerID="062d1e05019b76e8d1a4c7213d80a62c8cb63f00bd7d68b6a5d7f53899958740" exitCode=0 Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.032924 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" event={"ID":"c2048c3a-d91f-4ef5-93e1-41a621001c94","Type":"ContainerDied","Data":"062d1e05019b76e8d1a4c7213d80a62c8cb63f00bd7d68b6a5d7f53899958740"} Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.035782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e0ceefd-980c-412b-a75b-ea8ba1c95a19","Type":"ContainerDied","Data":"322d4030110cbb226560c0f4ac2212197e33351e0423ab6ed2f4e521ee9e38b6"} Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.035808 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322d4030110cbb226560c0f4ac2212197e33351e0423ab6ed2f4e521ee9e38b6" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.035809 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.038118 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerStarted","Data":"f2b2f3115ee6936ac1324047ebec1f4851149c3b933f448f644e45fde0e19cac"} Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.043672 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.142676 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.408739 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.675537 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:47 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:47 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:47 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:47 crc kubenswrapper[4809]: I0226 14:17:47.675862 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.051517 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerID="c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a" exitCode=0 Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.051796 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerDied","Data":"c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a"} Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.056061 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerID="e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb" exitCode=0 Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.056146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerDied","Data":"e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb"} Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.069782 4809 generic.go:334] "Generic (PLEG): container finished" podID="d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" containerID="0864617d800c335650de789b24fbc240f8427b5d044e1228baf3e82960f7ca88" exitCode=0 Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.069965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86","Type":"ContainerDied","Data":"0864617d800c335650de789b24fbc240f8427b5d044e1228baf3e82960f7ca88"} Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.082840 4809 generic.go:334] "Generic (PLEG): container finished" podID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerID="53257f629fc88176eacf29dd535547089226e5c783e51309cc579758f136e51d" exitCode=0 Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.082935 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerDied","Data":"53257f629fc88176eacf29dd535547089226e5c783e51309cc579758f136e51d"} Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.086349 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerStarted","Data":"b3408712ace6bc7a1d6f860cb77e480836eea440acb4b047d71e045d0bc520b7"} Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.163731 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.169894 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.426984 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.447047 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8df6q\" (UniqueName: \"kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q\") pod \"c2048c3a-d91f-4ef5-93e1-41a621001c94\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.447145 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume\") pod \"c2048c3a-d91f-4ef5-93e1-41a621001c94\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.447197 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume\") pod \"c2048c3a-d91f-4ef5-93e1-41a621001c94\" (UID: \"c2048c3a-d91f-4ef5-93e1-41a621001c94\") " Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.448549 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume" (OuterVolumeSpecName: "config-volume") pod "c2048c3a-d91f-4ef5-93e1-41a621001c94" (UID: "c2048c3a-d91f-4ef5-93e1-41a621001c94"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.455697 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q" (OuterVolumeSpecName: "kube-api-access-8df6q") pod "c2048c3a-d91f-4ef5-93e1-41a621001c94" (UID: "c2048c3a-d91f-4ef5-93e1-41a621001c94"). InnerVolumeSpecName "kube-api-access-8df6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.457933 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c2048c3a-d91f-4ef5-93e1-41a621001c94" (UID: "c2048c3a-d91f-4ef5-93e1-41a621001c94"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.549258 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8df6q\" (UniqueName: \"kubernetes.io/projected/c2048c3a-d91f-4ef5-93e1-41a621001c94-kube-api-access-8df6q\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.549302 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048c3a-d91f-4ef5-93e1-41a621001c94-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.549314 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2048c3a-d91f-4ef5-93e1-41a621001c94-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.676181 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 14:17:48 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Feb 26 14:17:48 crc kubenswrapper[4809]: [+]process-running ok Feb 26 14:17:48 crc kubenswrapper[4809]: healthz check failed Feb 26 14:17:48 crc kubenswrapper[4809]: I0226 14:17:48.676251 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.099089 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" event={"ID":"c2048c3a-d91f-4ef5-93e1-41a621001c94","Type":"ContainerDied","Data":"28c13921e890cc56643fa4c798d466098b923a6ca8da726e714a67073a6db8fd"} Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.099425 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c13921e890cc56643fa4c798d466098b923a6ca8da726e714a67073a6db8fd" Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.099187 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h" Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.103465 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerDied","Data":"4facce88b2154554eca191b0c80bcfaaf41bd3607442af7b8b44eecbfe004f41"} Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.103285 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerID="4facce88b2154554eca191b0c80bcfaaf41bd3607442af7b8b44eecbfe004f41" exitCode=0 Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.283640 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mxjxl" Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.675334 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:49 crc kubenswrapper[4809]: I0226 14:17:49.678658 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dwhvv" Feb 26 14:17:51 crc kubenswrapper[4809]: I0226 14:17:51.680288 4809 ???:1] "http: TLS handshake error from 192.168.126.11:44228: no serving certificate available for the kubelet" Feb 26 14:17:53 crc kubenswrapper[4809]: I0226 14:17:53.964191 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jlgsb" Feb 26 14:17:54 crc kubenswrapper[4809]: I0226 14:17:54.303140 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:54 crc kubenswrapper[4809]: I0226 14:17:54.308535 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:17:56 crc kubenswrapper[4809]: I0226 14:17:56.940936 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.031277 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access\") pod \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.031347 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir\") pod \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\" (UID: \"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86\") " Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.031495 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" (UID: "d89ec8ed-92a6-4593-bf31-ecc35bd3cb86"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.031924 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.037213 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" (UID: "d89ec8ed-92a6-4593-bf31-ecc35bd3cb86"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.133311 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d89ec8ed-92a6-4593-bf31-ecc35bd3cb86-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.173109 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d89ec8ed-92a6-4593-bf31-ecc35bd3cb86","Type":"ContainerDied","Data":"2c5dd9da0f6f58a7719f3fd6780700bbb9cd7dad59324914976c813ebeb2755c"} Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.173155 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5dd9da0f6f58a7719f3fd6780700bbb9cd7dad59324914976c813ebeb2755c" Feb 26 14:17:57 crc kubenswrapper[4809]: I0226 14:17:57.173265 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.127048 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535258-vds27"] Feb 26 14:18:00 crc kubenswrapper[4809]: E0226 14:18:00.127550 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" containerName="pruner" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.127560 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" containerName="pruner" Feb 26 14:18:00 crc kubenswrapper[4809]: E0226 14:18:00.127670 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2048c3a-d91f-4ef5-93e1-41a621001c94" containerName="collect-profiles" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.127679 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2048c3a-d91f-4ef5-93e1-41a621001c94" containerName="collect-profiles" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.128981 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89ec8ed-92a6-4593-bf31-ecc35bd3cb86" containerName="pruner" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.129003 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2048c3a-d91f-4ef5-93e1-41a621001c94" containerName="collect-profiles" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.129530 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.132716 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.135229 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-vds27"] Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.286467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8hb\" (UniqueName: \"kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb\") pod \"auto-csr-approver-29535258-vds27\" (UID: \"7f2d2454-f66e-44f5-82c7-00a32b77db8a\") " pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.387986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8hb\" (UniqueName: \"kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb\") pod \"auto-csr-approver-29535258-vds27\" (UID: \"7f2d2454-f66e-44f5-82c7-00a32b77db8a\") " pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.412064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8hb\" (UniqueName: \"kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb\") pod \"auto-csr-approver-29535258-vds27\" (UID: \"7f2d2454-f66e-44f5-82c7-00a32b77db8a\") " pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:00 crc kubenswrapper[4809]: I0226 14:18:00.451888 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:02 crc kubenswrapper[4809]: I0226 14:18:02.271636 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:18:02 crc kubenswrapper[4809]: I0226 14:18:02.272218 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" containerID="cri-o://a0a26c5499f1ef31a8c5db29fa1390c602919453293f51b5430efb744f628cd3" gracePeriod=30 Feb 26 14:18:02 crc kubenswrapper[4809]: I0226 14:18:02.302235 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:18:02 crc kubenswrapper[4809]: I0226 14:18:02.302493 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" containerID="cri-o://3fbdd218e042236f0ae25487a4060de553f14dc0541796f427c13ddc62a4f91a" gracePeriod=30 Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.237828 4809 generic.go:334] "Generic (PLEG): container finished" podID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerID="a0a26c5499f1ef31a8c5db29fa1390c602919453293f51b5430efb744f628cd3" exitCode=0 Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.237935 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" event={"ID":"6209aac2-e87a-4569-9a7c-81db7b662e7a","Type":"ContainerDied","Data":"a0a26c5499f1ef31a8c5db29fa1390c602919453293f51b5430efb744f628cd3"} Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.239884 4809 generic.go:334] "Generic (PLEG): container finished" podID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerID="3fbdd218e042236f0ae25487a4060de553f14dc0541796f427c13ddc62a4f91a" exitCode=0 Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.239934 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" event={"ID":"96a7d90b-dbeb-4f0d-b576-21100d495d15","Type":"ContainerDied","Data":"3fbdd218e042236f0ae25487a4060de553f14dc0541796f427c13ddc62a4f91a"} Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.949622 4809 patch_prober.go:28] interesting pod/controller-manager-845574b8bd-mwjs9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 26 14:18:04 crc kubenswrapper[4809]: I0226 14:18:04.949683 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 26 14:18:05 crc kubenswrapper[4809]: I0226 14:18:05.059619 4809 patch_prober.go:28] interesting pod/route-controller-manager-7b5dd9989c-lw54d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 
10.217.0.52:8443: connect: connection refused" start-of-body= Feb 26 14:18:05 crc kubenswrapper[4809]: I0226 14:18:05.059898 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Feb 26 14:18:05 crc kubenswrapper[4809]: I0226 14:18:05.122648 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:18:11 crc kubenswrapper[4809]: I0226 14:18:11.794446 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:18:11 crc kubenswrapper[4809]: I0226 14:18:11.795160 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:18:12 crc kubenswrapper[4809]: I0226 14:18:12.194112 4809 ???:1] "http: TLS handshake error from 192.168.126.11:36990: no serving certificate available for the kubelet" Feb 26 14:18:14 crc kubenswrapper[4809]: I0226 14:18:14.684294 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.059445 4809 patch_prober.go:28] interesting pod/route-controller-manager-7b5dd9989c-lw54d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.059516 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.212292 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.213588 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.216058 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.221639 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.226558 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.321748 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.321841 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.423316 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.423411 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.423447 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.445780 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.546981 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.949385 4809 patch_prober.go:28] interesting pod/controller-manager-845574b8bd-mwjs9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": context deadline exceeded" start-of-body= Feb 26 14:18:15 crc kubenswrapper[4809]: I0226 14:18:15.949453 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": context deadline exceeded" Feb 26 14:18:17 crc kubenswrapper[4809]: E0226 14:18:17.790314 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 14:18:20 crc kubenswrapper[4809]: E0226 14:18:17.790802 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:18:20 crc kubenswrapper[4809]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 14:18:20 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4crvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535256-qfv5b_openshift-infra(4611e2a1-2842-4901-b49b-126b928b38f1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 14:18:20 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:18:20 crc kubenswrapper[4809]: E0226 14:18:17.792066 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" Feb 26 14:18:20 crc kubenswrapper[4809]: E0226 14:18:18.311076 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.789961 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.805770 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.820996 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 14:18:20 crc kubenswrapper[4809]: E0226 14:18:20.821278 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.821293 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: E0226 14:18:20.821318 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.821326 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.821421 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" containerName="route-controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.821438 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" containerName="controller-manager" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.821816 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.831119 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.831948 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.854217 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.883556 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924245 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles\") pod \"6209aac2-e87a-4569-9a7c-81db7b662e7a\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924344 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmgj\" (UniqueName: \"kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj\") pod \"6209aac2-e87a-4569-9a7c-81db7b662e7a\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924437 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert\") pod \"96a7d90b-dbeb-4f0d-b576-21100d495d15\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca\") pod \"6209aac2-e87a-4569-9a7c-81db7b662e7a\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924490 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm57k\" (UniqueName: \"kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k\") pod \"96a7d90b-dbeb-4f0d-b576-21100d495d15\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924527 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config\") pod \"96a7d90b-dbeb-4f0d-b576-21100d495d15\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924575 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca\") pod \"96a7d90b-dbeb-4f0d-b576-21100d495d15\" (UID: \"96a7d90b-dbeb-4f0d-b576-21100d495d15\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924601 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config\") pod \"6209aac2-e87a-4569-9a7c-81db7b662e7a\" (UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924619 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert\") pod \"6209aac2-e87a-4569-9a7c-81db7b662e7a\" 
(UID: \"6209aac2-e87a-4569-9a7c-81db7b662e7a\") " Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924802 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924841 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924869 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjcr\" (UniqueName: \"kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924897 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.924997 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.925050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.925085 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.925106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.925215 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6209aac2-e87a-4569-9a7c-81db7b662e7a" (UID: "6209aac2-e87a-4569-9a7c-81db7b662e7a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.926005 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca" (OuterVolumeSpecName: "client-ca") pod "96a7d90b-dbeb-4f0d-b576-21100d495d15" (UID: "96a7d90b-dbeb-4f0d-b576-21100d495d15"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.926157 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config" (OuterVolumeSpecName: "config") pod "96a7d90b-dbeb-4f0d-b576-21100d495d15" (UID: "96a7d90b-dbeb-4f0d-b576-21100d495d15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.926329 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6209aac2-e87a-4569-9a7c-81db7b662e7a" (UID: "6209aac2-e87a-4569-9a7c-81db7b662e7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.926346 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config" (OuterVolumeSpecName: "config") pod "6209aac2-e87a-4569-9a7c-81db7b662e7a" (UID: "6209aac2-e87a-4569-9a7c-81db7b662e7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.932267 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96a7d90b-dbeb-4f0d-b576-21100d495d15" (UID: "96a7d90b-dbeb-4f0d-b576-21100d495d15"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.932189 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj" (OuterVolumeSpecName: "kube-api-access-vkmgj") pod "6209aac2-e87a-4569-9a7c-81db7b662e7a" (UID: "6209aac2-e87a-4569-9a7c-81db7b662e7a"). InnerVolumeSpecName "kube-api-access-vkmgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.933162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k" (OuterVolumeSpecName: "kube-api-access-hm57k") pod "96a7d90b-dbeb-4f0d-b576-21100d495d15" (UID: "96a7d90b-dbeb-4f0d-b576-21100d495d15"). InnerVolumeSpecName "kube-api-access-hm57k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:20 crc kubenswrapper[4809]: I0226 14:18:20.942139 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6209aac2-e87a-4569-9a7c-81db7b662e7a" (UID: "6209aac2-e87a-4569-9a7c-81db7b662e7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026324 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026371 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026426 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026440 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026449 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026559 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjcr\" (UniqueName: \"kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: 
\"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026592 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026848 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026862 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026871 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6209aac2-e87a-4569-9a7c-81db7b662e7a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026881 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026892 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmgj\" (UniqueName: \"kubernetes.io/projected/6209aac2-e87a-4569-9a7c-81db7b662e7a-kube-api-access-vkmgj\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026903 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7d90b-dbeb-4f0d-b576-21100d495d15-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026935 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6209aac2-e87a-4569-9a7c-81db7b662e7a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026947 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm57k\" (UniqueName: \"kubernetes.io/projected/96a7d90b-dbeb-4f0d-b576-21100d495d15-kube-api-access-hm57k\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.026956 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7d90b-dbeb-4f0d-b576-21100d495d15-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.031934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.085511 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert\") pod 
\"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.085653 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.086116 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.086717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.087835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.088969 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjcr\" (UniqueName: \"kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr\") pod \"controller-manager-799bd568c6-4r2q5\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.196549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.197428 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.329046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" event={"ID":"96a7d90b-dbeb-4f0d-b576-21100d495d15","Type":"ContainerDied","Data":"9ccd663ef28be8621591653a59d3de8825c7370b891103467eeec4f3c2686c35"} Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.329084 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.329120 4809 scope.go:117] "RemoveContainer" containerID="3fbdd218e042236f0ae25487a4060de553f14dc0541796f427c13ddc62a4f91a" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.331589 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" event={"ID":"6209aac2-e87a-4569-9a7c-81db7b662e7a","Type":"ContainerDied","Data":"d05dbe3d95efe4d778e64c34bb9bb68e09be282629b74f4b4bcaf2ed1c6d41dd"} Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.331742 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845574b8bd-mwjs9" Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.359277 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.362036 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-845574b8bd-mwjs9"] Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.376199 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:18:21 crc kubenswrapper[4809]: I0226 14:18:21.380531 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5dd9989c-lw54d"] Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.271350 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6209aac2-e87a-4569-9a7c-81db7b662e7a" path="/var/lib/kubelet/pods/6209aac2-e87a-4569-9a7c-81db7b662e7a/volumes" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.272973 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a7d90b-dbeb-4f0d-b576-21100d495d15" path="/var/lib/kubelet/pods/96a7d90b-dbeb-4f0d-b576-21100d495d15/volumes" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.273411 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.391245 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.392092 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.393513 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.394204 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.394550 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.394897 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.394952 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.397952 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.407967 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.568110 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.568157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgqf\" (UniqueName: \"kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.568207 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.568242 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.669926 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert\") pod 
\"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.670240 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqf\" (UniqueName: \"kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.670367 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.670474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.671514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.671812 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.683668 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:22 crc kubenswrapper[4809]: I0226 14:18:22.720161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqf\" (UniqueName: \"kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf\") pod \"route-controller-manager-86d578bd5b-7kbsp\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:23 crc kubenswrapper[4809]: I0226 14:18:23.009237 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.508713 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.509283 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66sjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r2kqz_openshift-marketplace(f9f47aa1-3b5e-4e70-b27f-88ff985a0104): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\": context canceled" logger="UnhandledError" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.510390 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-r2kqz" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.639555 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.639798 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chwn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-v9hcf_openshift-marketplace(fa922741-f315-4b16-af68-7d3a26d8604c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:28 crc kubenswrapper[4809]: E0226 14:18:28.640988 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-v9hcf" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" Feb 26 14:18:33 crc kubenswrapper[4809]: E0226 14:18:33.573460 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 14:18:33 crc kubenswrapper[4809]: E0226 14:18:33.574089 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6rzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jwxvj_openshift-marketplace(a0cd2a65-4aaf-4322-8e24-ca1aa935c510): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\": context canceled" logger="UnhandledError" Feb 26 14:18:33 crc kubenswrapper[4809]: E0226 14:18:33.575316 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:7ec90947c5e42a6b363a181de1231271558968b64076f26200c96a020ef90893\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-jwxvj" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" Feb 26 14:18:34 crc kubenswrapper[4809]: E0226 14:18:34.535525 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-v9hcf" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" Feb 26 14:18:34 crc kubenswrapper[4809]: E0226 14:18:34.535795 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jwxvj" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" Feb 26 14:18:34 crc kubenswrapper[4809]: E0226 14:18:34.535855 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r2kqz" 
podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.003955 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.004371 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpjmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kdrnc_openshift-marketplace(2312cf07-fe31-4bbd-97ec-b330a5edbe87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.005922 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kdrnc" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.447469 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.447932 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r4d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-45bqj_openshift-marketplace(2328fe45-3fdc-4f65-9377-3e43e72b4b22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:37 crc kubenswrapper[4809]: E0226 14:18:37.449397 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-45bqj" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" Feb 26 14:18:38 crc kubenswrapper[4809]: E0226 14:18:38.476784 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 14:18:38 crc kubenswrapper[4809]: E0226 14:18:38.477041 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gkfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7k8zw_openshift-marketplace(52e94bad-9e41-4386-9746-264b0fa96b35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:38 crc kubenswrapper[4809]: E0226 14:18:38.478802 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7k8zw" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" Feb 26 14:18:41 crc kubenswrapper[4809]: I0226 14:18:41.794256 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:18:41 crc kubenswrapper[4809]: I0226 14:18:41.794624 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:18:41 crc kubenswrapper[4809]: I0226 14:18:41.794692 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:18:41 crc kubenswrapper[4809]: I0226 14:18:41.795518 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:18:41 crc kubenswrapper[4809]: I0226 14:18:41.795595 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02" gracePeriod=600 Feb 26 14:18:44 crc kubenswrapper[4809]: E0226 14:18:44.196864 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-45bqj" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" Feb 26 14:18:44 crc kubenswrapper[4809]: E0226 14:18:44.196884 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kdrnc" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" Feb 26 14:18:44 crc kubenswrapper[4809]: E0226 14:18:44.196884 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7k8zw" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" Feb 26 14:18:44 crc kubenswrapper[4809]: I0226 14:18:44.578845 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-vds27"] Feb 26 14:18:45 crc kubenswrapper[4809]: I0226 14:18:45.457362 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02" exitCode=0 Feb 26 14:18:45 crc kubenswrapper[4809]: I0226 14:18:45.457411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02"} Feb 26 14:18:45 crc kubenswrapper[4809]: E0226 14:18:45.538370 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 14:18:45 crc kubenswrapper[4809]: E0226 14:18:45.538640 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67bs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wq8dn_openshift-marketplace(e1837416-cc54-4d37-ac70-82eb03cdaa83): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:45 crc kubenswrapper[4809]: E0226 14:18:45.539886 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wq8dn" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" Feb 26 14:18:46 crc kubenswrapper[4809]: E0226 14:18:46.807865 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wq8dn" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" Feb 26 14:18:46 crc kubenswrapper[4809]: I0226 14:18:46.851642 4809 scope.go:117] "RemoveContainer" containerID="a0a26c5499f1ef31a8c5db29fa1390c602919453293f51b5430efb744f628cd3" Feb 26 14:18:46 crc kubenswrapper[4809]: W0226 14:18:46.858396 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f2d2454_f66e_44f5_82c7_00a32b77db8a.slice/crio-be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8 WatchSource:0}: Error finding container be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8: Status 404 returned error can't find the container with id be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8 Feb 26 14:18:46 crc kubenswrapper[4809]: E0226 14:18:46.874787 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 14:18:46 crc kubenswrapper[4809]: E0226 14:18:46.874953 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7797m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l8g9s_openshift-marketplace(b1f7c55c-ff28-4d72-aa11-17908ebe8c26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:18:46 crc kubenswrapper[4809]: E0226 14:18:46.876106 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l8g9s" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.091578 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.348064 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.407581 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:47 crc kubenswrapper[4809]: W0226 14:18:47.412644 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b14b95e_36f0_4dac_8b56_505b9ab095de.slice/crio-f34688c658116b9618c531c563277d0f20e347c94ddc28b95f46063fad711759 WatchSource:0}: Error finding container f34688c658116b9618c531c563277d0f20e347c94ddc28b95f46063fad711759: Status 404 returned error can't find the container with id f34688c658116b9618c531c563277d0f20e347c94ddc28b95f46063fad711759 Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.479739 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.507775 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccacc64b-b318-406f-bc8c-26c85b64f18b","Type":"ContainerStarted","Data":"3d6afc9975451ac608eb2683efd4bccf8eca5d0796237ef368851956a40f4568"} Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.531234 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed"} Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.547908 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" event={"ID":"4611e2a1-2842-4901-b49b-126b928b38f1","Type":"ContainerStarted","Data":"bbbde2fa9a85e0f8569d13cce3214f943832fc2fcd73aff0947066f6b51495bd"} Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.551821 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" event={"ID":"0b14b95e-36f0-4dac-8b56-505b9ab095de","Type":"ContainerStarted","Data":"f34688c658116b9618c531c563277d0f20e347c94ddc28b95f46063fad711759"} Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.561825 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cff9820-8051-457c-8c23-43e771b351b7","Type":"ContainerStarted","Data":"fcc91048b916ee8a6cf0887433d06d3b5ff91e9d5712be44e9127a3252a1f190"} Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.568210 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-vds27" event={"ID":"7f2d2454-f66e-44f5-82c7-00a32b77db8a","Type":"ContainerStarted","Data":"be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8"} Feb 26 14:18:47 crc kubenswrapper[4809]: E0226 14:18:47.579689 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l8g9s" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.619762 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" podStartSLOduration=98.045096649 podStartE2EDuration="2m47.619737065s" podCreationTimestamp="2026-02-26 14:16:00 +0000 UTC" firstStartedPulling="2026-02-26 14:17:37.43351555 +0000 UTC m=+235.906836073" lastFinishedPulling="2026-02-26 14:18:47.008155966 +0000 UTC m=+305.481476489" observedRunningTime="2026-02-26 14:18:47.598797791 +0000 UTC m=+306.072118314" watchObservedRunningTime="2026-02-26 14:18:47.619737065 +0000 UTC m=+306.093057588" Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.855461 4809 csr.go:261] certificate signing request csr-4dssd is approved, waiting to be issued Feb 26 14:18:47 crc kubenswrapper[4809]: I0226 14:18:47.863291 4809 csr.go:257] certificate signing request csr-4dssd is issued Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.596523 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccacc64b-b318-406f-bc8c-26c85b64f18b","Type":"ContainerStarted","Data":"1f702291bbcf7ec4928e443a2ee399f46b6d2d5948e60022eccf81fc35cb5531"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 
14:18:48.599520 4809 generic.go:334] "Generic (PLEG): container finished" podID="4611e2a1-2842-4901-b49b-126b928b38f1" containerID="bbbde2fa9a85e0f8569d13cce3214f943832fc2fcd73aff0947066f6b51495bd" exitCode=0 Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.599572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" event={"ID":"4611e2a1-2842-4901-b49b-126b928b38f1","Type":"ContainerDied","Data":"bbbde2fa9a85e0f8569d13cce3214f943832fc2fcd73aff0947066f6b51495bd"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.601331 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" event={"ID":"0b14b95e-36f0-4dac-8b56-505b9ab095de","Type":"ContainerStarted","Data":"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.601549 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerName="controller-manager" containerID="cri-o://aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0" gracePeriod=30 Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.601711 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.603847 4809 generic.go:334] "Generic (PLEG): container finished" podID="5cff9820-8051-457c-8c23-43e771b351b7" containerID="721b4b6ad3dcee584015c48198ea8be826667a2366793ddb71d3d9559a592777" exitCode=0 Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.603905 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cff9820-8051-457c-8c23-43e771b351b7","Type":"ContainerDied","Data":"721b4b6ad3dcee584015c48198ea8be826667a2366793ddb71d3d9559a592777"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.606166 4809 generic.go:334] "Generic (PLEG): container finished" podID="7f2d2454-f66e-44f5-82c7-00a32b77db8a" containerID="19bce38b2ebc193c5058edb2495c4e5fdb2d01f2cc7d055f9c087810f461ba65" exitCode=0 Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.606256 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-vds27" event={"ID":"7f2d2454-f66e-44f5-82c7-00a32b77db8a","Type":"ContainerDied","Data":"19bce38b2ebc193c5058edb2495c4e5fdb2d01f2cc7d055f9c087810f461ba65"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.607610 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" event={"ID":"a8ea0839-5d1b-4a7b-8038-29f08e90ce80","Type":"ContainerStarted","Data":"220b9ca928dbae7d20a7774f93e69357cc82261fdf8622dd7b93cc042401a535"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.608159 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" event={"ID":"a8ea0839-5d1b-4a7b-8038-29f08e90ce80","Type":"ContainerStarted","Data":"a61d4e26f15cde25868a70ff760bd0debc78dbd9a7b18be2c23b91a961b46f90"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.608186 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 
14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.622879 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa922741-f315-4b16-af68-7d3a26d8604c" containerID="8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719" exitCode=0 Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.623459 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerDied","Data":"8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719"} Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.624221 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=28.624205275 podStartE2EDuration="28.624205275s" podCreationTimestamp="2026-02-26 14:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:18:48.609672434 +0000 UTC m=+307.082992957" watchObservedRunningTime="2026-02-26 14:18:48.624205275 +0000 UTC m=+307.097525798" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.624483 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.631225 4809 patch_prober.go:28] interesting pod/controller-manager-799bd568c6-4r2q5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": read tcp 10.217.0.2:35910->10.217.0.61:8443: read: connection reset by peer" start-of-body= Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.631276 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": read tcp 10.217.0.2:35910->10.217.0.61:8443: read: connection reset by peer" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.638377 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" podStartSLOduration=46.638353093 podStartE2EDuration="46.638353093s" podCreationTimestamp="2026-02-26 14:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:18:48.634112385 +0000 UTC m=+307.107432918" watchObservedRunningTime="2026-02-26 14:18:48.638353093 +0000 UTC m=+307.111673616" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.655418 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" podStartSLOduration=26.65539786 podStartE2EDuration="26.65539786s" podCreationTimestamp="2026-02-26 14:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:18:48.652569964 +0000 UTC m=+307.125890507" watchObservedRunningTime="2026-02-26 14:18:48.65539786 +0000 UTC m=+307.128718383" Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.864524 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, 
rotation deadline is 2026-11-10 04:55:58.066524904 +0000 UTC Feb 26 14:18:48 crc kubenswrapper[4809]: I0226 14:18:48.864984 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6158h37m9.201560181s for next certificate rotation Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.010557 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.046308 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:18:49 crc kubenswrapper[4809]: E0226 14:18:49.046663 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerName="controller-manager" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.046681 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerName="controller-manager" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.046858 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerName="controller-manager" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.047659 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.053838 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.092432 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca\") pod \"0b14b95e-36f0-4dac-8b56-505b9ab095de\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.092544 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzjcr\" (UniqueName: \"kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr\") pod \"0b14b95e-36f0-4dac-8b56-505b9ab095de\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.092590 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config\") pod \"0b14b95e-36f0-4dac-8b56-505b9ab095de\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.092622 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles\") pod \"0b14b95e-36f0-4dac-8b56-505b9ab095de\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.092753 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert\") pod \"0b14b95e-36f0-4dac-8b56-505b9ab095de\" (UID: \"0b14b95e-36f0-4dac-8b56-505b9ab095de\") " Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.093640 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b14b95e-36f0-4dac-8b56-505b9ab095de" (UID: "0b14b95e-36f0-4dac-8b56-505b9ab095de"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.093670 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b14b95e-36f0-4dac-8b56-505b9ab095de" (UID: "0b14b95e-36f0-4dac-8b56-505b9ab095de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.093715 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config" (OuterVolumeSpecName: "config") pod "0b14b95e-36f0-4dac-8b56-505b9ab095de" (UID: "0b14b95e-36f0-4dac-8b56-505b9ab095de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.098517 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr" (OuterVolumeSpecName: "kube-api-access-dzjcr") pod "0b14b95e-36f0-4dac-8b56-505b9ab095de" (UID: "0b14b95e-36f0-4dac-8b56-505b9ab095de"). InnerVolumeSpecName "kube-api-access-dzjcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.098581 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b14b95e-36f0-4dac-8b56-505b9ab095de" (UID: "0b14b95e-36f0-4dac-8b56-505b9ab095de"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194722 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhkh\" (UniqueName: \"kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194757 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194777 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194860 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194945 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b14b95e-36f0-4dac-8b56-505b9ab095de-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194958 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194968 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzjcr\" (UniqueName: \"kubernetes.io/projected/0b14b95e-36f0-4dac-8b56-505b9ab095de-kube-api-access-dzjcr\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194978 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.194987 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b14b95e-36f0-4dac-8b56-505b9ab095de-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.298398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.298515 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhkh\" (UniqueName: \"kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.298574 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.298592 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.298682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.299783 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.301281 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.302151 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.304001 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.315138 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhkh\" (UniqueName: \"kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh\") pod \"controller-manager-857b9f49f6-dhts8\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.382775 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.586709 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.629394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" event={"ID":"b8997fcd-287e-4ccc-859e-2b5a1de84558","Type":"ContainerStarted","Data":"3cd23a80fe36383eac1985fa4e7bbd069f6cbaf098aace6afd3875178a1f9fe9"} Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.632449 4809 generic.go:334] "Generic (PLEG): container finished" podID="0b14b95e-36f0-4dac-8b56-505b9ab095de" containerID="aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0" exitCode=0 Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.632511 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.632539 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" event={"ID":"0b14b95e-36f0-4dac-8b56-505b9ab095de","Type":"ContainerDied","Data":"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0"} Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.632572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-799bd568c6-4r2q5" event={"ID":"0b14b95e-36f0-4dac-8b56-505b9ab095de","Type":"ContainerDied","Data":"f34688c658116b9618c531c563277d0f20e347c94ddc28b95f46063fad711759"} Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.632594 4809 scope.go:117] "RemoveContainer" containerID="aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.638587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerStarted","Data":"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507"} Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.655259 4809 scope.go:117] "RemoveContainer" containerID="aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0" Feb 26 14:18:49 crc kubenswrapper[4809]: E0226 14:18:49.656722 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0\": container with ID starting with aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0 not found: ID does not exist" containerID="aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.656798 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0"} err="failed to get container status \"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0\": rpc error: code = NotFound desc = could not find container \"aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0\": container with ID starting with aa329ae525e82dac38aec08032d920c7d3d14159eee499a9e92a163008a994a0 not found: ID does not exist" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.685771 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.689709 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-799bd568c6-4r2q5"] Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.866381 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-17 01:16:52.946941737 +0000 UTC Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.866678 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7042h58m3.080267667s for next certificate rotation Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.876669 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:18:49 crc kubenswrapper[4809]: I0226 14:18:49.937163 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.009599 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crvg\" (UniqueName: \"kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg\") pod \"4611e2a1-2842-4901-b49b-126b928b38f1\" (UID: \"4611e2a1-2842-4901-b49b-126b928b38f1\") " Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.016494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg" (OuterVolumeSpecName: "kube-api-access-4crvg") pod "4611e2a1-2842-4901-b49b-126b928b38f1" (UID: "4611e2a1-2842-4901-b49b-126b928b38f1"). InnerVolumeSpecName "kube-api-access-4crvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.110598 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access\") pod \"5cff9820-8051-457c-8c23-43e771b351b7\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.110697 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir\") pod \"5cff9820-8051-457c-8c23-43e771b351b7\" (UID: \"5cff9820-8051-457c-8c23-43e771b351b7\") " Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.110789 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5cff9820-8051-457c-8c23-43e771b351b7" (UID: "5cff9820-8051-457c-8c23-43e771b351b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.111044 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crvg\" (UniqueName: \"kubernetes.io/projected/4611e2a1-2842-4901-b49b-126b928b38f1-kube-api-access-4crvg\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.111056 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cff9820-8051-457c-8c23-43e771b351b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.114969 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5cff9820-8051-457c-8c23-43e771b351b7" (UID: "5cff9820-8051-457c-8c23-43e771b351b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.153210 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.212858 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cff9820-8051-457c-8c23-43e771b351b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.269639 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b14b95e-36f0-4dac-8b56-505b9ab095de" path="/var/lib/kubelet/pods/0b14b95e-36f0-4dac-8b56-505b9ab095de/volumes" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.313694 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt8hb\" (UniqueName: \"kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb\") pod \"7f2d2454-f66e-44f5-82c7-00a32b77db8a\" (UID: \"7f2d2454-f66e-44f5-82c7-00a32b77db8a\") " Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.317800 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb" (OuterVolumeSpecName: "kube-api-access-qt8hb") pod "7f2d2454-f66e-44f5-82c7-00a32b77db8a" (UID: "7f2d2454-f66e-44f5-82c7-00a32b77db8a"). InnerVolumeSpecName "kube-api-access-qt8hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.416067 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt8hb\" (UniqueName: \"kubernetes.io/projected/7f2d2454-f66e-44f5-82c7-00a32b77db8a-kube-api-access-qt8hb\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.644432 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535258-vds27" event={"ID":"7f2d2454-f66e-44f5-82c7-00a32b77db8a","Type":"ContainerDied","Data":"be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.644535 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be45e72205d2e5d494012338cd0210607ccdd8fb09e4e81822b5abb7d7185ce8" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.644469 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535258-vds27" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.646750 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerID="4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507" exitCode=0 Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.646807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerDied","Data":"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.648994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerStarted","Data":"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.650795 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerID="1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757" exitCode=0 Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.650880 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerDied","Data":"1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.655399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" event={"ID":"b8997fcd-287e-4ccc-859e-2b5a1de84558","Type":"ContainerStarted","Data":"ed5deccfcab2beefde025f13a703587a5962ddc3e0c86587c24c543e2c8490d8"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.655877 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.657554 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cff9820-8051-457c-8c23-43e771b351b7","Type":"ContainerDied","Data":"fcc91048b916ee8a6cf0887433d06d3b5ff91e9d5712be44e9127a3252a1f190"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.657581 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc91048b916ee8a6cf0887433d06d3b5ff91e9d5712be44e9127a3252a1f190" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.657627 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.665842 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.668125 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.668140 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535256-qfv5b" event={"ID":"4611e2a1-2842-4901-b49b-126b928b38f1","Type":"ContainerDied","Data":"939a954417be136ae49aec9a088fe124b89c90464bd2a803d25004414f5af299"} Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.668168 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939a954417be136ae49aec9a088fe124b89c90464bd2a803d25004414f5af299" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.691872 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" podStartSLOduration=28.691816973999998 podStartE2EDuration="28.691816974s" podCreationTimestamp="2026-02-26 14:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:18:50.684526533 +0000 UTC m=+309.157847056" watchObservedRunningTime="2026-02-26 14:18:50.691816974 +0000 UTC m=+309.165137487" Feb 26 14:18:50 crc kubenswrapper[4809]: I0226 14:18:50.734206 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v9hcf" podStartSLOduration=2.9989097510000002 podStartE2EDuration="1m7.734184528s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" firstStartedPulling="2026-02-26 14:17:44.833324827 +0000 UTC m=+243.306645350" lastFinishedPulling="2026-02-26 14:18:49.568599604 +0000 UTC m=+308.041920127" observedRunningTime="2026-02-26 14:18:50.733525628 +0000 UTC m=+309.206846191" watchObservedRunningTime="2026-02-26 14:18:50.734184528 +0000 UTC m=+309.207505051" Feb 26 14:18:51 crc kubenswrapper[4809]: I0226 14:18:51.681438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerStarted","Data":"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57"} Feb 26 14:18:51 crc kubenswrapper[4809]: I0226 14:18:51.683508 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerStarted","Data":"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200"} Feb 26 14:18:51 crc kubenswrapper[4809]: I0226 14:18:51.724379 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2kqz" podStartSLOduration=3.570497777 podStartE2EDuration="1m6.724358434s" podCreationTimestamp="2026-02-26 14:17:45 +0000 UTC" firstStartedPulling="2026-02-26 14:17:48.059458756 +0000 UTC m=+246.532779279" lastFinishedPulling="2026-02-26 14:18:51.213319373 +0000 UTC m=+309.686639936" observedRunningTime="2026-02-26 14:18:51.722296731 +0000 UTC m=+310.195617244" watchObservedRunningTime="2026-02-26 14:18:51.724358434 +0000 UTC m=+310.197678957" Feb 26 14:18:51 crc kubenswrapper[4809]: I0226 14:18:51.726371 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jwxvj" podStartSLOduration=2.675968149 podStartE2EDuration="1m5.726364695s" podCreationTimestamp="2026-02-26 14:17:46 +0000 UTC" firstStartedPulling="2026-02-26 14:17:48.053771798 +0000 UTC m=+246.527092321" 
lastFinishedPulling="2026-02-26 14:18:51.104168344 +0000 UTC m=+309.577488867" observedRunningTime="2026-02-26 14:18:51.706582095 +0000 UTC m=+310.179902608" watchObservedRunningTime="2026-02-26 14:18:51.726364695 +0000 UTC m=+310.199685218" Feb 26 14:18:54 crc kubenswrapper[4809]: I0226 14:18:54.011771 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:54 crc kubenswrapper[4809]: I0226 14:18:54.012328 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:54 crc kubenswrapper[4809]: I0226 14:18:54.180718 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:54 crc kubenswrapper[4809]: I0226 14:18:54.750548 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:55 crc kubenswrapper[4809]: I0226 14:18:55.747997 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:18:55 crc kubenswrapper[4809]: I0226 14:18:55.749078 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:18:55 crc kubenswrapper[4809]: I0226 14:18:55.808716 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:18:56 crc kubenswrapper[4809]: I0226 14:18:56.728526 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerStarted","Data":"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031"} Feb 26 14:18:56 crc kubenswrapper[4809]: I0226 14:18:56.760224 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:18:56 crc kubenswrapper[4809]: I0226 14:18:56.761310 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:18:56 crc kubenswrapper[4809]: I0226 14:18:56.773133 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:18:57 crc kubenswrapper[4809]: I0226 14:18:57.648469 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:18:57 crc kubenswrapper[4809]: I0226 14:18:57.648957 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v9hcf" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="registry-server" containerID="cri-o://0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547" gracePeriod=2 Feb 26 14:18:57 crc kubenswrapper[4809]: I0226 14:18:57.737775 4809 generic.go:334] "Generic (PLEG): container finished" podID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerID="2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031" exitCode=0 Feb 26 14:18:57 crc kubenswrapper[4809]: I0226 14:18:57.737865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" 
event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerDied","Data":"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031"} Feb 26 14:18:57 crc kubenswrapper[4809]: I0226 14:18:57.810819 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jwxvj" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="registry-server" probeResult="failure" output=< Feb 26 14:18:57 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:18:57 crc kubenswrapper[4809]: > Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.168216 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.199062 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content\") pod \"fa922741-f315-4b16-af68-7d3a26d8604c\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.199127 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chwn2\" (UniqueName: \"kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2\") pod \"fa922741-f315-4b16-af68-7d3a26d8604c\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.200825 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities\") pod \"fa922741-f315-4b16-af68-7d3a26d8604c\" (UID: \"fa922741-f315-4b16-af68-7d3a26d8604c\") " Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.201437 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities" (OuterVolumeSpecName: "utilities") pod "fa922741-f315-4b16-af68-7d3a26d8604c" (UID: "fa922741-f315-4b16-af68-7d3a26d8604c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.202252 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.206260 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2" (OuterVolumeSpecName: "kube-api-access-chwn2") pod "fa922741-f315-4b16-af68-7d3a26d8604c" (UID: "fa922741-f315-4b16-af68-7d3a26d8604c"). InnerVolumeSpecName "kube-api-access-chwn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.258622 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa922741-f315-4b16-af68-7d3a26d8604c" (UID: "fa922741-f315-4b16-af68-7d3a26d8604c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.303729 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa922741-f315-4b16-af68-7d3a26d8604c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.303768 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chwn2\" (UniqueName: \"kubernetes.io/projected/fa922741-f315-4b16-af68-7d3a26d8604c-kube-api-access-chwn2\") on node \"crc\" DevicePath \"\"" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.746449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerStarted","Data":"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd"} Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.749757 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa922741-f315-4b16-af68-7d3a26d8604c" containerID="0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547" exitCode=0 Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.749815 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v9hcf" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.749843 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerDied","Data":"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547"} Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.749896 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v9hcf" event={"ID":"fa922741-f315-4b16-af68-7d3a26d8604c","Type":"ContainerDied","Data":"a1612a6f3ea323145f53a4586530331568f9ccaabe0d46ded9f1d965b916ea8f"} Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.749922 4809 scope.go:117] "RemoveContainer" containerID="0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.751549 4809 generic.go:334] "Generic (PLEG): container finished" podID="52e94bad-9e41-4386-9746-264b0fa96b35" containerID="9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a" exitCode=0 Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.751649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerDied","Data":"9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a"} Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.784984 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdrnc" podStartSLOduration=2.39685133 podStartE2EDuration="1m15.784967792s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" firstStartedPulling="2026-02-26 14:17:44.759945844 +0000 UTC m=+243.233266357" lastFinishedPulling="2026-02-26 14:18:58.148062296 +0000 UTC m=+316.621382819" observedRunningTime="2026-02-26 14:18:58.76938154 +0000 UTC m=+317.242702063" watchObservedRunningTime="2026-02-26 14:18:58.784967792 +0000 UTC m=+317.258288315" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.796728 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.800624 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v9hcf"] Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.805362 4809 scope.go:117] "RemoveContainer" containerID="8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.824713 4809 scope.go:117] "RemoveContainer" containerID="ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.838863 4809 scope.go:117] "RemoveContainer" containerID="0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547" Feb 26 14:18:58 crc kubenswrapper[4809]: E0226 14:18:58.839329 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547\": container with ID starting with 0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547 not found: ID does not exist" containerID="0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.839370 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547"} err="failed to get container status \"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547\": rpc error: code = NotFound desc = could not find container \"0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547\": container with ID starting with 0c2b3b471ed8e47202109591bab92c664b5e90479fec78e66a0ce01061b25547 not found: ID does not exist" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.839400 4809 scope.go:117] "RemoveContainer" containerID="8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719" Feb 26 14:18:58 crc kubenswrapper[4809]: E0226 14:18:58.839788 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719\": container with ID starting with 8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719 not found: ID does not exist" containerID="8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.839844 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719"} err="failed to get container status \"8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719\": rpc error: code = NotFound desc = could not find container \"8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719\": container with ID starting with 8eb016eb4619cdfa13ce3b85e2f0932303cb15dfe3a142abc0ffa1beb5c98719 not found: ID does not exist" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.839880 4809 scope.go:117] "RemoveContainer" containerID="ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280" Feb 26 14:18:58 crc kubenswrapper[4809]: E0226 14:18:58.840197 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280\": container with ID starting with 
ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280 not found: ID does not exist" containerID="ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280" Feb 26 14:18:58 crc kubenswrapper[4809]: I0226 14:18:58.840227 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280"} err="failed to get container status \"ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280\": rpc error: code = NotFound desc = could not find container \"ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280\": container with ID starting with ad9545c1516cc8ed0913126dc04d2f8293b1597b114dce9be792e628c6432280 not found: ID does not exist" Feb 26 14:18:59 crc kubenswrapper[4809]: I0226 14:18:59.759508 4809 generic.go:334] "Generic (PLEG): container finished" podID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerID="7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2" exitCode=0 Feb 26 14:18:59 crc kubenswrapper[4809]: I0226 14:18:59.759595 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerDied","Data":"7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2"} Feb 26 14:18:59 crc kubenswrapper[4809]: I0226 14:18:59.768392 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerStarted","Data":"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef"} Feb 26 14:18:59 crc kubenswrapper[4809]: I0226 14:18:59.770452 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerStarted","Data":"d628cb21359dbdfe5adbd31bcc5521e896f5e952832fcddf158889bcb1656c4f"} Feb 26 14:18:59 crc kubenswrapper[4809]: I0226 14:18:59.815817 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7k8zw" podStartSLOduration=3.617292767 podStartE2EDuration="1m16.815799311s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" firstStartedPulling="2026-02-26 14:17:46.012964207 +0000 UTC m=+244.486284730" lastFinishedPulling="2026-02-26 14:18:59.211470761 +0000 UTC m=+317.684791274" observedRunningTime="2026-02-26 14:18:59.81477527 +0000 UTC m=+318.288095793" watchObservedRunningTime="2026-02-26 14:18:59.815799311 +0000 UTC m=+318.289119834" Feb 26 14:19:00 crc kubenswrapper[4809]: I0226 14:19:00.262771 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" path="/var/lib/kubelet/pods/fa922741-f315-4b16-af68-7d3a26d8604c/volumes" Feb 26 14:19:00 crc kubenswrapper[4809]: I0226 14:19:00.777075 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerStarted","Data":"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb"} Feb 26 14:19:00 crc kubenswrapper[4809]: I0226 14:19:00.779168 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerID="d628cb21359dbdfe5adbd31bcc5521e896f5e952832fcddf158889bcb1656c4f" exitCode=0 Feb 26 14:19:00 crc kubenswrapper[4809]: I0226 14:19:00.779230 4809 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerDied","Data":"d628cb21359dbdfe5adbd31bcc5521e896f5e952832fcddf158889bcb1656c4f"} Feb 26 14:19:01 crc kubenswrapper[4809]: I0226 14:19:01.807365 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-45bqj" podStartSLOduration=3.516567937 podStartE2EDuration="1m18.807343504s" podCreationTimestamp="2026-02-26 14:17:43 +0000 UTC" firstStartedPulling="2026-02-26 14:17:44.841301432 +0000 UTC m=+243.314621955" lastFinishedPulling="2026-02-26 14:19:00.132076999 +0000 UTC m=+318.605397522" observedRunningTime="2026-02-26 14:19:01.806657764 +0000 UTC m=+320.279978317" watchObservedRunningTime="2026-02-26 14:19:01.807343504 +0000 UTC m=+320.280664027" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.615058 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.616598 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.663170 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.822816 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.822882 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:19:03 crc kubenswrapper[4809]: I0226 14:19:03.862627 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:19:04 crc kubenswrapper[4809]: I0226 14:19:04.197715 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:04 crc kubenswrapper[4809]: I0226 14:19:04.197822 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:04 crc kubenswrapper[4809]: I0226 14:19:04.236086 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:04 crc kubenswrapper[4809]: I0226 14:19:04.834192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:04 crc kubenswrapper[4809]: I0226 14:19:04.885978 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:19:05 crc kubenswrapper[4809]: I0226 14:19:05.807135 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerStarted","Data":"0e1b5c53b957255d436fca772d6ab328583fae31dab7575920ebecc763e36951"} Feb 26 14:19:06 crc kubenswrapper[4809]: I0226 14:19:06.797698 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:19:06 crc kubenswrapper[4809]: I0226 14:19:06.815653 4809 generic.go:334] "Generic (PLEG): 
container finished" podID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerID="c0e088e9272318067c0730f1345263cce32bbbf6f14011cc373d2bd7b3c83065" exitCode=0 Feb 26 14:19:06 crc kubenswrapper[4809]: I0226 14:19:06.815718 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerDied","Data":"c0e088e9272318067c0730f1345263cce32bbbf6f14011cc373d2bd7b3c83065"} Feb 26 14:19:06 crc kubenswrapper[4809]: I0226 14:19:06.831386 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wq8dn" podStartSLOduration=9.732454305 podStartE2EDuration="1m20.831366633s" podCreationTimestamp="2026-02-26 14:17:46 +0000 UTC" firstStartedPulling="2026-02-26 14:17:54.028164542 +0000 UTC m=+252.501485065" lastFinishedPulling="2026-02-26 14:19:05.12707687 +0000 UTC m=+323.600397393" observedRunningTime="2026-02-26 14:19:06.830606149 +0000 UTC m=+325.303926682" watchObservedRunningTime="2026-02-26 14:19:06.831366633 +0000 UTC m=+325.304687156" Feb 26 14:19:06 crc kubenswrapper[4809]: I0226 14:19:06.840060 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:19:07 crc kubenswrapper[4809]: I0226 14:19:07.143399 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:07 crc kubenswrapper[4809]: I0226 14:19:07.143462 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:07 crc kubenswrapper[4809]: I0226 14:19:07.824133 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerStarted","Data":"497e32c789c283685c4d0f5458f1f88dbc21edb7214757dcc16e92ee5c70026a"} Feb 26 14:19:07 crc kubenswrapper[4809]: I0226 14:19:07.841288 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l8g9s" podStartSLOduration=3.660338238 podStartE2EDuration="1m22.841227407s" podCreationTimestamp="2026-02-26 14:17:45 +0000 UTC" firstStartedPulling="2026-02-26 14:17:48.084438722 +0000 UTC m=+246.557759245" lastFinishedPulling="2026-02-26 14:19:07.265327901 +0000 UTC m=+325.738648414" observedRunningTime="2026-02-26 14:19:07.838055621 +0000 UTC m=+326.311376144" watchObservedRunningTime="2026-02-26 14:19:07.841227407 +0000 UTC m=+326.314547930" Feb 26 14:19:08 crc kubenswrapper[4809]: I0226 14:19:08.177622 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wq8dn" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="registry-server" probeResult="failure" output=< Feb 26 14:19:08 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:19:08 crc kubenswrapper[4809]: > Feb 26 14:19:08 crc kubenswrapper[4809]: I0226 14:19:08.647506 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:19:08 crc kubenswrapper[4809]: I0226 14:19:08.648456 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7k8zw" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="registry-server" 
containerID="cri-o://972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef" gracePeriod=2 Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.306091 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.464780 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkfc\" (UniqueName: \"kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc\") pod \"52e94bad-9e41-4386-9746-264b0fa96b35\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.464941 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities\") pod \"52e94bad-9e41-4386-9746-264b0fa96b35\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.464980 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content\") pod \"52e94bad-9e41-4386-9746-264b0fa96b35\" (UID: \"52e94bad-9e41-4386-9746-264b0fa96b35\") " Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.466031 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities" (OuterVolumeSpecName: "utilities") pod "52e94bad-9e41-4386-9746-264b0fa96b35" (UID: "52e94bad-9e41-4386-9746-264b0fa96b35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.470223 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc" (OuterVolumeSpecName: "kube-api-access-4gkfc") pod "52e94bad-9e41-4386-9746-264b0fa96b35" (UID: "52e94bad-9e41-4386-9746-264b0fa96b35"). InnerVolumeSpecName "kube-api-access-4gkfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.519542 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52e94bad-9e41-4386-9746-264b0fa96b35" (UID: "52e94bad-9e41-4386-9746-264b0fa96b35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.566565 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.566612 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e94bad-9e41-4386-9746-264b0fa96b35-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.566626 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkfc\" (UniqueName: \"kubernetes.io/projected/52e94bad-9e41-4386-9746-264b0fa96b35-kube-api-access-4gkfc\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.835142 4809 generic.go:334] "Generic (PLEG): container finished" podID="52e94bad-9e41-4386-9746-264b0fa96b35" containerID="972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef" exitCode=0 Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.835185 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerDied","Data":"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef"} Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.835208 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7k8zw" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.835231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8zw" event={"ID":"52e94bad-9e41-4386-9746-264b0fa96b35","Type":"ContainerDied","Data":"f9bc9b1723a77db9ce73d0c6345cb3788fe9e3ef59d9575d956e2b0854dad5b4"} Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.835250 4809 scope.go:117] "RemoveContainer" containerID="972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.854938 4809 scope.go:117] "RemoveContainer" containerID="9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.867103 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.871010 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7k8zw"] Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.883759 4809 scope.go:117] "RemoveContainer" containerID="65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.897159 4809 scope.go:117] "RemoveContainer" containerID="972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef" Feb 26 14:19:09 crc kubenswrapper[4809]: E0226 14:19:09.897516 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef\": container with ID starting with 972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef not found: ID does not exist" containerID="972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.897557 
4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef"} err="failed to get container status \"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef\": rpc error: code = NotFound desc = could not find container \"972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef\": container with ID starting with 972f0dc70a3e58381ed1e1476e91fdc0a8a2d7dbe6e1acf8099e50d6b5e4f1ef not found: ID does not exist" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.897585 4809 scope.go:117] "RemoveContainer" containerID="9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a" Feb 26 14:19:09 crc kubenswrapper[4809]: E0226 14:19:09.897897 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a\": container with ID starting with 9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a not found: ID does not exist" containerID="9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.897965 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a"} err="failed to get container status \"9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a\": rpc error: code = NotFound desc = could not find container \"9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a\": container with ID starting with 9c763909a48dc50374036fe0be4c0fb104a27a3da31f0b504100355fe29e8d4a not found: ID does not exist" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.898006 4809 scope.go:117] "RemoveContainer" containerID="65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc" Feb 26 14:19:09 crc kubenswrapper[4809]: E0226 14:19:09.898336 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc\": container with ID starting with 65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc not found: ID does not exist" containerID="65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc" Feb 26 14:19:09 crc kubenswrapper[4809]: I0226 14:19:09.898362 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc"} err="failed to get container status \"65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc\": rpc error: code = NotFound desc = could not find container \"65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc\": container with ID starting with 65dc55ec005a535e36138f37e0c8b9949d5ec7e6c195f8a7ae1829e9912d3afc not found: ID does not exist" Feb 26 14:19:10 crc kubenswrapper[4809]: I0226 14:19:10.264705 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" path="/var/lib/kubelet/pods/52e94bad-9e41-4386-9746-264b0fa96b35/volumes" Feb 26 14:19:13 crc kubenswrapper[4809]: I0226 14:19:13.423932 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rs49n"] Feb 26 14:19:13 crc kubenswrapper[4809]: I0226 14:19:13.651170 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:19:16 crc kubenswrapper[4809]: I0226 14:19:16.186165 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:16 crc kubenswrapper[4809]: I0226 14:19:16.187237 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:16 crc kubenswrapper[4809]: I0226 14:19:16.221205 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:16 crc kubenswrapper[4809]: I0226 14:19:16.916100 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:17 crc kubenswrapper[4809]: I0226 14:19:17.182529 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:17 crc kubenswrapper[4809]: I0226 14:19:17.233213 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:17 crc kubenswrapper[4809]: I0226 14:19:17.848989 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:19:18 crc kubenswrapper[4809]: I0226 14:19:18.890363 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l8g9s" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="registry-server" containerID="cri-o://497e32c789c283685c4d0f5458f1f88dbc21edb7214757dcc16e92ee5c70026a" gracePeriod=2 Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.246769 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.246992 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wq8dn" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="registry-server" containerID="cri-o://0e1b5c53b957255d436fca772d6ab328583fae31dab7575920ebecc763e36951" gracePeriod=2 Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.896899 4809 generic.go:334] "Generic (PLEG): container finished" podID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerID="497e32c789c283685c4d0f5458f1f88dbc21edb7214757dcc16e92ee5c70026a" exitCode=0 Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.896981 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerDied","Data":"497e32c789c283685c4d0f5458f1f88dbc21edb7214757dcc16e92ee5c70026a"} Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.899183 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerID="0e1b5c53b957255d436fca772d6ab328583fae31dab7575920ebecc763e36951" exitCode=0 Feb 26 14:19:19 crc kubenswrapper[4809]: I0226 14:19:19.899240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerDied","Data":"0e1b5c53b957255d436fca772d6ab328583fae31dab7575920ebecc763e36951"} Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.655583 4809 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.694269 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828468 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content\") pod \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828525 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bs2\" (UniqueName: \"kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2\") pod \"e1837416-cc54-4d37-ac70-82eb03cdaa83\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828567 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7797m\" (UniqueName: \"kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m\") pod \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828621 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content\") pod \"e1837416-cc54-4d37-ac70-82eb03cdaa83\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828645 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities\") pod \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\" (UID: \"b1f7c55c-ff28-4d72-aa11-17908ebe8c26\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.828675 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities\") pod \"e1837416-cc54-4d37-ac70-82eb03cdaa83\" (UID: \"e1837416-cc54-4d37-ac70-82eb03cdaa83\") " Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.829597 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities" (OuterVolumeSpecName: "utilities") pod "b1f7c55c-ff28-4d72-aa11-17908ebe8c26" (UID: "b1f7c55c-ff28-4d72-aa11-17908ebe8c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.829857 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities" (OuterVolumeSpecName: "utilities") pod "e1837416-cc54-4d37-ac70-82eb03cdaa83" (UID: "e1837416-cc54-4d37-ac70-82eb03cdaa83"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.834962 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2" (OuterVolumeSpecName: "kube-api-access-67bs2") pod "e1837416-cc54-4d37-ac70-82eb03cdaa83" (UID: "e1837416-cc54-4d37-ac70-82eb03cdaa83"). InnerVolumeSpecName "kube-api-access-67bs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.835075 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m" (OuterVolumeSpecName: "kube-api-access-7797m") pod "b1f7c55c-ff28-4d72-aa11-17908ebe8c26" (UID: "b1f7c55c-ff28-4d72-aa11-17908ebe8c26"). InnerVolumeSpecName "kube-api-access-7797m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.852852 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1f7c55c-ff28-4d72-aa11-17908ebe8c26" (UID: "b1f7c55c-ff28-4d72-aa11-17908ebe8c26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.908046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g9s" event={"ID":"b1f7c55c-ff28-4d72-aa11-17908ebe8c26","Type":"ContainerDied","Data":"7defb4ce7f5fd066da2281f25b9c55ce7455ceb0e26c874fef3f730083e454d6"} Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.908075 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g9s" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.908109 4809 scope.go:117] "RemoveContainer" containerID="497e32c789c283685c4d0f5458f1f88dbc21edb7214757dcc16e92ee5c70026a" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.911321 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq8dn" event={"ID":"e1837416-cc54-4d37-ac70-82eb03cdaa83","Type":"ContainerDied","Data":"b3408712ace6bc7a1d6f860cb77e480836eea440acb4b047d71e045d0bc520b7"} Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.911405 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wq8dn" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.925210 4809 scope.go:117] "RemoveContainer" containerID="c0e088e9272318067c0730f1345263cce32bbbf6f14011cc373d2bd7b3c83065" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.932735 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.932770 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.932782 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bs2\" (UniqueName: \"kubernetes.io/projected/e1837416-cc54-4d37-ac70-82eb03cdaa83-kube-api-access-67bs2\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.932791 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7797m\" (UniqueName: \"kubernetes.io/projected/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-kube-api-access-7797m\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.932800 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f7c55c-ff28-4d72-aa11-17908ebe8c26-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.933731 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.936377 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g9s"] Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.941335 4809 scope.go:117] "RemoveContainer" containerID="53257f629fc88176eacf29dd535547089226e5c783e51309cc579758f136e51d" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.949440 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1837416-cc54-4d37-ac70-82eb03cdaa83" (UID: "e1837416-cc54-4d37-ac70-82eb03cdaa83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.962805 4809 scope.go:117] "RemoveContainer" containerID="0e1b5c53b957255d436fca772d6ab328583fae31dab7575920ebecc763e36951" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.975208 4809 scope.go:117] "RemoveContainer" containerID="d628cb21359dbdfe5adbd31bcc5521e896f5e952832fcddf158889bcb1656c4f" Feb 26 14:19:20 crc kubenswrapper[4809]: I0226 14:19:20.987878 4809 scope.go:117] "RemoveContainer" containerID="4facce88b2154554eca191b0c80bcfaaf41bd3607442af7b8b44eecbfe004f41" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.034247 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1837416-cc54-4d37-ac70-82eb03cdaa83-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.236310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.236408 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.239222 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.239829 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.250293 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.254445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.255222 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.257883 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wq8dn"] Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.338195 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.338297 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.341303 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.351185 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.363683 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.364511 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.472127 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.479366 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 14:19:21 crc kubenswrapper[4809]: I0226 14:19:21.570118 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:19:22 crc kubenswrapper[4809]: W0226 14:19:22.063812 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-93a1b757cef1cd061d0fdc783f744596d81a33c5fa9cc89f8a5dd612b36fd5ea WatchSource:0}: Error finding container 93a1b757cef1cd061d0fdc783f744596d81a33c5fa9cc89f8a5dd612b36fd5ea: Status 404 returned error can't find the container with id 93a1b757cef1cd061d0fdc783f744596d81a33c5fa9cc89f8a5dd612b36fd5ea Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.264798 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" path="/var/lib/kubelet/pods/b1f7c55c-ff28-4d72-aa11-17908ebe8c26/volumes" Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.265907 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" path="/var/lib/kubelet/pods/e1837416-cc54-4d37-ac70-82eb03cdaa83/volumes" Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.287538 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.288134 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" podUID="b8997fcd-287e-4ccc-859e-2b5a1de84558" containerName="controller-manager" containerID="cri-o://ed5deccfcab2beefde025f13a703587a5962ddc3e0c86587c24c543e2c8490d8" gracePeriod=30 Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.374345 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.374935 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerName="route-controller-manager" containerID="cri-o://220b9ca928dbae7d20a7774f93e69357cc82261fdf8622dd7b93cc042401a535" gracePeriod=30 Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.933863 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e0abc9f92b7b2fe00e8fb1b82e4e6eb93ee773483a6467c06a827c45f964e5cf"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.933920 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"93a1b757cef1cd061d0fdc783f744596d81a33c5fa9cc89f8a5dd612b36fd5ea"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.934149 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.936086 4809 generic.go:334] "Generic (PLEG): container finished" podID="b8997fcd-287e-4ccc-859e-2b5a1de84558" containerID="ed5deccfcab2beefde025f13a703587a5962ddc3e0c86587c24c543e2c8490d8" exitCode=0 Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.936160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" event={"ID":"b8997fcd-287e-4ccc-859e-2b5a1de84558","Type":"ContainerDied","Data":"ed5deccfcab2beefde025f13a703587a5962ddc3e0c86587c24c543e2c8490d8"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.938167 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a75bfa835543fb16c3ee748fc8c473c895c90f85a1b24d02211df4ded7fa985f"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.938207 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2c7ee625079fe3dced10c283d98a63ae62c27079b09a7ee861753dc271eb80d9"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.940284 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerID="220b9ca928dbae7d20a7774f93e69357cc82261fdf8622dd7b93cc042401a535" exitCode=0 Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.940346 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" event={"ID":"a8ea0839-5d1b-4a7b-8038-29f08e90ce80","Type":"ContainerDied","Data":"220b9ca928dbae7d20a7774f93e69357cc82261fdf8622dd7b93cc042401a535"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.941687 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fd2b99f8d4acc08d4c9019ad723997cb01238dc1c01b3a209abd031535631f4e"} Feb 26 14:19:22 crc kubenswrapper[4809]: I0226 14:19:22.941723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e46a2cbdf8df469becb391e6963c6e35cd94ba10c203da38d8b7f3277cb1ce88"} Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.010888 4809 patch_prober.go:28] interesting pod/route-controller-manager-86d578bd5b-7kbsp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.010948 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.341304 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.344886 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369534 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369822 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369832 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369839 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369855 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerName="route-controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369863 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerName="route-controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369873 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369880 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369888 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369896 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369905 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369911 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369922 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369929 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369941 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369949 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 
14:19:23.369957 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369964 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369975 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369981 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.369992 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8997fcd-287e-4ccc-859e-2b5a1de84558" containerName="controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.369999 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8997fcd-287e-4ccc-859e-2b5a1de84558" containerName="controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372027 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372047 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372083 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cff9820-8051-457c-8c23-43e771b351b7" containerName="pruner" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372114 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cff9820-8051-457c-8c23-43e771b351b7" containerName="pruner" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372127 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372136 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="extract-content" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372181 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372190 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372199 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372207 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="extract-utilities" Feb 26 14:19:23 crc kubenswrapper[4809]: E0226 14:19:23.372219 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2d2454-f66e-44f5-82c7-00a32b77db8a" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372226 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2d2454-f66e-44f5-82c7-00a32b77db8a" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372502 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="52e94bad-9e41-4386-9746-264b0fa96b35" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372522 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa922741-f315-4b16-af68-7d3a26d8604c" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372530 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f7c55c-ff28-4d72-aa11-17908ebe8c26" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372538 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8997fcd-287e-4ccc-859e-2b5a1de84558" containerName="controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372597 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" containerName="route-controller-manager" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372607 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cff9820-8051-457c-8c23-43e771b351b7" containerName="pruner" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372613 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2d2454-f66e-44f5-82c7-00a32b77db8a" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372621 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1837416-cc54-4d37-ac70-82eb03cdaa83" containerName="registry-server" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.372627 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" containerName="oc" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.374999 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.378191 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467096 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhkh\" (UniqueName: \"kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh\") pod \"b8997fcd-287e-4ccc-859e-2b5a1de84558\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467214 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config\") pod \"b8997fcd-287e-4ccc-859e-2b5a1de84558\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467266 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca\") pod \"b8997fcd-287e-4ccc-859e-2b5a1de84558\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467322 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert\") pod \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467373 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtgqf\" (UniqueName: \"kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf\") pod \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467400 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles\") pod \"b8997fcd-287e-4ccc-859e-2b5a1de84558\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467457 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config\") pod \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca\") pod \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\" (UID: \"a8ea0839-5d1b-4a7b-8038-29f08e90ce80\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467551 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert\") pod \"b8997fcd-287e-4ccc-859e-2b5a1de84558\" (UID: \"b8997fcd-287e-4ccc-859e-2b5a1de84558\") " Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.467753 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.468242 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.468293 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztw8c\" (UniqueName: \"kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.468368 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.468420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.469233 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config" (OuterVolumeSpecName: "config") pod "b8997fcd-287e-4ccc-859e-2b5a1de84558" (UID: "b8997fcd-287e-4ccc-859e-2b5a1de84558"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.469599 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8997fcd-287e-4ccc-859e-2b5a1de84558" (UID: "b8997fcd-287e-4ccc-859e-2b5a1de84558"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.469642 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8ea0839-5d1b-4a7b-8038-29f08e90ce80" (UID: "a8ea0839-5d1b-4a7b-8038-29f08e90ce80"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.469610 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config" (OuterVolumeSpecName: "config") pod "a8ea0839-5d1b-4a7b-8038-29f08e90ce80" (UID: "a8ea0839-5d1b-4a7b-8038-29f08e90ce80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.470244 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b8997fcd-287e-4ccc-859e-2b5a1de84558" (UID: "b8997fcd-287e-4ccc-859e-2b5a1de84558"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.473713 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8ea0839-5d1b-4a7b-8038-29f08e90ce80" (UID: "a8ea0839-5d1b-4a7b-8038-29f08e90ce80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.473806 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh" (OuterVolumeSpecName: "kube-api-access-2dhkh") pod "b8997fcd-287e-4ccc-859e-2b5a1de84558" (UID: "b8997fcd-287e-4ccc-859e-2b5a1de84558"). InnerVolumeSpecName "kube-api-access-2dhkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.473932 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8997fcd-287e-4ccc-859e-2b5a1de84558" (UID: "b8997fcd-287e-4ccc-859e-2b5a1de84558"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.478193 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf" (OuterVolumeSpecName: "kube-api-access-vtgqf") pod "a8ea0839-5d1b-4a7b-8038-29f08e90ce80" (UID: "a8ea0839-5d1b-4a7b-8038-29f08e90ce80"). InnerVolumeSpecName "kube-api-access-vtgqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.569785 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.569872 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztw8c\" (UniqueName: \"kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.569916 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.569956 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570158 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570181 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtgqf\" (UniqueName: \"kubernetes.io/projected/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-kube-api-access-vtgqf\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570202 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570218 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570234 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8997fcd-287e-4ccc-859e-2b5a1de84558-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570250 4809 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhkh\" (UniqueName: \"kubernetes.io/projected/b8997fcd-287e-4ccc-859e-2b5a1de84558-kube-api-access-2dhkh\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570267 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570282 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8997fcd-287e-4ccc-859e-2b5a1de84558-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.570297 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ea0839-5d1b-4a7b-8038-29f08e90ce80-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.571356 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.571549 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.571691 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.574342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.586036 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztw8c\" (UniqueName: \"kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c\") pod \"controller-manager-55ccd978bc-npnhf\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.613313 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.614376 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.625132 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.696628 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.772636 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.772807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsp9c\" (UniqueName: \"kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.772845 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.772872 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.875860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.876291 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsp9c\" (UniqueName: \"kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.876318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: 
\"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.876338 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.877909 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.878311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.884949 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.894632 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsp9c\" (UniqueName: \"kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c\") pod \"route-controller-manager-56879b5cf7-rclqd\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.948861 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.948859 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp" event={"ID":"a8ea0839-5d1b-4a7b-8038-29f08e90ce80","Type":"ContainerDied","Data":"a61d4e26f15cde25868a70ff760bd0debc78dbd9a7b18be2c23b91a961b46f90"} Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.948944 4809 scope.go:117] "RemoveContainer" containerID="220b9ca928dbae7d20a7774f93e69357cc82261fdf8622dd7b93cc042401a535" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.951281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" event={"ID":"b8997fcd-287e-4ccc-859e-2b5a1de84558","Type":"ContainerDied","Data":"3cd23a80fe36383eac1985fa4e7bbd069f6cbaf098aace6afd3875178a1f9fe9"} Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.951303 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-857b9f49f6-dhts8" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.958796 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.968683 4809 scope.go:117] "RemoveContainer" containerID="ed5deccfcab2beefde025f13a703587a5962ddc3e0c86587c24c543e2c8490d8" Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.983062 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.987195 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d578bd5b-7kbsp"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.995106 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:19:23 crc kubenswrapper[4809]: I0226 14:19:23.997879 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-857b9f49f6-dhts8"] Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.108493 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:19:24 crc kubenswrapper[4809]: W0226 14:19:24.118474 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311b7185_0675_4fb3_8047_f57c92ad7c1b.slice/crio-fc717865ae26a2bb64a722093b8e8f35818df723a6d1a81502b610c8f0efd699 WatchSource:0}: Error finding container fc717865ae26a2bb64a722093b8e8f35818df723a6d1a81502b610c8f0efd699: Status 404 returned error can't find the container with id fc717865ae26a2bb64a722093b8e8f35818df723a6d1a81502b610c8f0efd699 Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.264458 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ea0839-5d1b-4a7b-8038-29f08e90ce80" path="/var/lib/kubelet/pods/a8ea0839-5d1b-4a7b-8038-29f08e90ce80/volumes" Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.265257 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8997fcd-287e-4ccc-859e-2b5a1de84558" path="/var/lib/kubelet/pods/b8997fcd-287e-4ccc-859e-2b5a1de84558/volumes" Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.381989 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:19:24 crc kubenswrapper[4809]: W0226 14:19:24.386364 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31edfd3d_2f2a_4078_81bc_a8455878a528.slice/crio-4d521683c5951075a540f759c73040b8925cf52318484072b9daf13cb0e0a14a WatchSource:0}: Error finding container 4d521683c5951075a540f759c73040b8925cf52318484072b9daf13cb0e0a14a: Status 404 returned error can't find the container with id 4d521683c5951075a540f759c73040b8925cf52318484072b9daf13cb0e0a14a Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.957887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" 
event={"ID":"311b7185-0675-4fb3-8047-f57c92ad7c1b","Type":"ContainerStarted","Data":"d22f6cd26ccdca628c0714e7ddb4d5304f13f296de22a4f17166b6c584249592"} Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.958251 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" event={"ID":"311b7185-0675-4fb3-8047-f57c92ad7c1b","Type":"ContainerStarted","Data":"fc717865ae26a2bb64a722093b8e8f35818df723a6d1a81502b610c8f0efd699"} Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.958273 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.959348 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerStarted","Data":"590f10725e7b60311c3b2a069dec3133bc36d2f9493a98daafdc83e452f7737d"} Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.959384 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerStarted","Data":"4d521683c5951075a540f759c73040b8925cf52318484072b9daf13cb0e0a14a"} Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.965056 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:19:24 crc kubenswrapper[4809]: I0226 14:19:24.979323 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" podStartSLOduration=2.97930431 podStartE2EDuration="2.97930431s" podCreationTimestamp="2026-02-26 14:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:19:24.978143715 +0000 UTC m=+343.451464238" watchObservedRunningTime="2026-02-26 14:19:24.97930431 +0000 UTC m=+343.452624833" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.373438 4809 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374299 4809 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374558 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374705 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded" gracePeriod=15 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374745 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6" gracePeriod=15 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374903 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e" gracePeriod=15 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374897 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083" gracePeriod=15 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.374971 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1" gracePeriod=15 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.376713 4809 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377571 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377594 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377606 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377613 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377620 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377627 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377640 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377647 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377655 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377660 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377671 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377678 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377690 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377697 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377703 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377711 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377809 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377820 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377833 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377841 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377850 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377857 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377863 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377981 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.377987 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: E0226 14:19:25.377996 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.378002 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.378116 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.378126 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.500937 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.500990 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501076 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501111 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501154 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501788 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.501863 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605696 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605716 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605744 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605746 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 
14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605818 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605846 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.605804 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.606280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.606431 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.606580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.606734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.970180 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.971477 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.972262 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6" exitCode=0 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.972296 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1" exitCode=0 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.972305 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083" exitCode=0 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.972314 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e" exitCode=2 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.972361 4809 scope.go:117] "RemoveContainer" containerID="5727a2dc7170f9b30c4b2d45ea156c81e562a85edc25b5d8919cc582566dea9a" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.973799 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccacc64b-b318-406f-bc8c-26c85b64f18b" containerID="1f702291bbcf7ec4928e443a2ee399f46b6d2d5948e60022eccf81fc35cb5531" exitCode=0 Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.973894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccacc64b-b318-406f-bc8c-26c85b64f18b","Type":"ContainerDied","Data":"1f702291bbcf7ec4928e443a2ee399f46b6d2d5948e60022eccf81fc35cb5531"} Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.974858 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.980185 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.980674 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.981327 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" 
pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.981738 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:25 crc kubenswrapper[4809]: I0226 14:19:25.982004 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:26 crc kubenswrapper[4809]: I0226 14:19:26.983875 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.281576 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.282225 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.282545 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428428 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir\") pod \"ccacc64b-b318-406f-bc8c-26c85b64f18b\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428544 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock\") pod \"ccacc64b-b318-406f-bc8c-26c85b64f18b\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428585 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ccacc64b-b318-406f-bc8c-26c85b64f18b" (UID: "ccacc64b-b318-406f-bc8c-26c85b64f18b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428605 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock" (OuterVolumeSpecName: "var-lock") pod "ccacc64b-b318-406f-bc8c-26c85b64f18b" (UID: "ccacc64b-b318-406f-bc8c-26c85b64f18b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428681 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access\") pod \"ccacc64b-b318-406f-bc8c-26c85b64f18b\" (UID: \"ccacc64b-b318-406f-bc8c-26c85b64f18b\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428978 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.428994 4809 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccacc64b-b318-406f-bc8c-26c85b64f18b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.437439 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ccacc64b-b318-406f-bc8c-26c85b64f18b" (UID: "ccacc64b-b318-406f-bc8c-26c85b64f18b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.529821 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccacc64b-b318-406f-bc8c-26c85b64f18b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.846250 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.847759 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.849299 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.850217 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.850769 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936254 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936322 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936376 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936457 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936514 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936561 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936778 4809 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936805 4809 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.936823 4809 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.998799 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccacc64b-b318-406f-bc8c-26c85b64f18b","Type":"ContainerDied","Data":"3d6afc9975451ac608eb2683efd4bccf8eca5d0796237ef368851956a40f4568"} Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.998842 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6afc9975451ac608eb2683efd4bccf8eca5d0796237ef368851956a40f4568" Feb 26 14:19:27 crc kubenswrapper[4809]: I0226 14:19:27.998843 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.002856 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.003694 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded" exitCode=0 Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.003756 4809 scope.go:117] "RemoveContainer" containerID="460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.003864 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.014595 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.015147 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.015826 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.020580 4809 scope.go:117] "RemoveContainer" containerID="2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.029485 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.029922 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.030377 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.038344 4809 scope.go:117] "RemoveContainer" containerID="ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.051909 4809 scope.go:117] "RemoveContainer" containerID="6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.068852 4809 scope.go:117] "RemoveContainer" containerID="e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.088079 4809 scope.go:117] "RemoveContainer" containerID="b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.103479 4809 scope.go:117] "RemoveContainer" 
containerID="460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.103922 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\": container with ID starting with 460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6 not found: ID does not exist" containerID="460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.103952 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6"} err="failed to get container status \"460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\": rpc error: code = NotFound desc = could not find container \"460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6\": container with ID starting with 460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6 not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.103976 4809 scope.go:117] "RemoveContainer" containerID="2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.104373 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\": container with ID starting with 2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1 not found: ID does not exist" containerID="2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.104392 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1"} err="failed to get container status \"2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\": rpc error: code = NotFound desc = could not find container \"2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1\": container with ID starting with 2282c364980845d105e1ef3a7011f0ff615e59ed494ed44c2cdd32c526fa63d1 not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.104411 4809 scope.go:117] "RemoveContainer" containerID="ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.104782 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\": container with ID starting with ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083 not found: ID does not exist" containerID="ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.104836 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083"} err="failed to get container status \"ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\": rpc error: code = NotFound desc = could not find container \"ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083\": container with ID starting with 
ddbe07b3f1b0d527741254c1de0f397b2f0781187d35c40c4e8563d819b2e083 not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.104857 4809 scope.go:117] "RemoveContainer" containerID="6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.105243 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\": container with ID starting with 6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e not found: ID does not exist" containerID="6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.105269 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e"} err="failed to get container status \"6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\": rpc error: code = NotFound desc = could not find container \"6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e\": container with ID starting with 6a04404e977fbac5c0da564f0d030523e45f8bcdbcc3c63641e679e4c45fbe4e not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.105290 4809 scope.go:117] "RemoveContainer" containerID="e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.105561 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\": container with ID starting with e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded not found: ID does not exist" containerID="e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.105585 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded"} err="failed to get container status \"e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\": rpc error: code = NotFound desc = could not find container \"e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded\": container with ID starting with e869472a6214d609ac33c0ef4b24f7bd82f315253c40b46c43e2041874b8fded not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.105603 4809 scope.go:117] "RemoveContainer" containerID="b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed" Feb 26 14:19:28 crc kubenswrapper[4809]: E0226 14:19:28.105918 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\": container with ID starting with b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed not found: ID does not exist" containerID="b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.106424 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed"} err="failed to get container status \"b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\": rpc 
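The NotFound pairs above are benign: the containers were already pruned by the runtime, and the later status lookups merely confirm the IDs are gone, so the deletor treats NotFound as "already removed". A schematic sketch of that idempotent pattern (stubbed runtime client, not the real CRI API):

    // removecontainer.go - schematic only: removal can be attempted from
    // more than one path, so a NotFound status means "already gone",
    // not a failure to be retried.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("NotFound: ID does not exist")

    // stub runtime: every lookup fails, as it does once CRI-O has pruned the container
    func containerStatus(id string) error { return errNotFound }

    func removeContainer(id string) {
        if err := containerStatus(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("container %s already removed, nothing to do\n", id[:12])
                return // idempotent: a second caller observes success
            }
            fmt.Println("transient error, would retry:", err)
            return
        }
        fmt.Println("would issue RemoveContainer for", id[:12])
    }

    func main() {
        removeContainer("460b82d2c571300b495f09a728f889f4093aab066abc383e42feffab85607ca6")
    }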
error: code = NotFound desc = could not find container \"b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed\": container with ID starting with b3352858a7f529ad9021474a2419b305107d7e2b4324aab1873f4f4c5cbb17ed not found: ID does not exist" Feb 26 14:19:28 crc kubenswrapper[4809]: I0226 14:19:28.266576 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 14:19:30 crc kubenswrapper[4809]: E0226 14:19:30.419960 4809 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:30 crc kubenswrapper[4809]: I0226 14:19:30.420510 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:30 crc kubenswrapper[4809]: E0226 14:19:30.449784 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897d1b8e5914446 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:19:30.44926983 +0000 UTC m=+348.922590383,LastTimestamp:2026-02-26 14:19:30.44926983 +0000 UTC m=+348.922590383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.024099 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aa45ac35c7512a34ae370d2e12a2fa220e546dbe465318d3a58f6995a3117737"} Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.024445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7f6a2444f6ed8d76608a694e34e38af5258914a08fea75f40558c544475b3e42"} Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.025347 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:31 crc kubenswrapper[4809]: E0226 14:19:31.025490 4809 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.025589 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.026709 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/0.log" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.026744 4809 generic.go:334] "Generic (PLEG): container finished" podID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerID="590f10725e7b60311c3b2a069dec3133bc36d2f9493a98daafdc83e452f7737d" exitCode=255 Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.026771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerDied","Data":"590f10725e7b60311c3b2a069dec3133bc36d2f9493a98daafdc83e452f7737d"} Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.027235 4809 scope.go:117] "RemoveContainer" containerID="590f10725e7b60311c3b2a069dec3133bc36d2f9493a98daafdc83e452f7737d" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.027455 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:31 crc kubenswrapper[4809]: I0226 14:19:31.028156 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.035700 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/0.log" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.035777 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerStarted","Data":"bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6"} Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.037403 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.037481 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" 
pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.037962 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.260340 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:32 crc kubenswrapper[4809]: I0226 14:19:32.261275 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: I0226 14:19:33.037717 4809 patch_prober.go:28] interesting pod/route-controller-manager-56879b5cf7-rclqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:19:33 crc kubenswrapper[4809]: I0226 14:19:33.037781 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.587628 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.588107 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.588620 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.588883 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: 
connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.589151 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:33 crc kubenswrapper[4809]: I0226 14:19:33.589179 4809 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.589422 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="200ms" Feb 26 14:19:33 crc kubenswrapper[4809]: E0226 14:19:33.791158 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="400ms" Feb 26 14:19:34 crc kubenswrapper[4809]: I0226 14:19:34.040684 4809 patch_prober.go:28] interesting pod/route-controller-manager-56879b5cf7-rclqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:19:34 crc kubenswrapper[4809]: I0226 14:19:34.040807 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:19:34 crc kubenswrapper[4809]: E0226 14:19:34.191909 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="800ms" Feb 26 14:19:34 crc kubenswrapper[4809]: E0226 14:19:34.992387 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="1.6s" Feb 26 14:19:35 crc kubenswrapper[4809]: I0226 14:19:35.042073 4809 patch_prober.go:28] interesting pod/route-controller-manager-56879b5cf7-rclqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:19:35 crc kubenswrapper[4809]: I0226 14:19:35.042161 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:19:35 crc kubenswrapper[4809]: E0226 14:19:35.798570 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897d1b8e5914446 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 14:19:30.44926983 +0000 UTC m=+348.922590383,LastTimestamp:2026-02-26 14:19:30.44926983 +0000 UTC m=+348.922590383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 14:19:36 crc kubenswrapper[4809]: E0226 14:19:36.594157 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="3.2s" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.070549 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.071346 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.071401 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2" exitCode=1 Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.071447 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2"} Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.072214 4809 scope.go:117] "RemoveContainer" containerID="e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.072332 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.072629 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.073914 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.452879 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" containerID="cri-o://239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988" gracePeriod=15 Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.958765 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.959563 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.960090 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.960749 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:38 crc kubenswrapper[4809]: I0226 14:19:38.961093 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071371 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071511 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071574 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071628 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071932 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.071935 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzgxh\" (UniqueName: \"kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.072937 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073055 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073112 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073158 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073212 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073330 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073380 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073422 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073454 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template\") pod \"420b577e-f310-4cc8-bc79-a2abcb837bbe\" (UID: \"420b577e-f310-4cc8-bc79-a2abcb837bbe\") " Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073840 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.073879 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.074357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.074771 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.074843 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.078397 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.080818 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh" (OuterVolumeSpecName: "kube-api-access-bzgxh") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "kube-api-access-bzgxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.083426 4809 generic.go:334] "Generic (PLEG): container finished" podID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerID="239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988" exitCode=0 Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.083482 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" event={"ID":"420b577e-f310-4cc8-bc79-a2abcb837bbe","Type":"ContainerDied","Data":"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988"} Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.083503 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.083537 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" event={"ID":"420b577e-f310-4cc8-bc79-a2abcb837bbe","Type":"ContainerDied","Data":"a6f4470ea3f13d2f319d82281df9013458392161ffd98dfc65a8a08acd27f5fd"} Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.083563 4809 scope.go:117] "RemoveContainer" containerID="239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.084276 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.084576 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.084872 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.085327 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.089984 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.090540 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.090922 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.090886 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.091274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.091068 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.091792 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.091880 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "420b577e-f310-4cc8-bc79-a2abcb837bbe" (UID: "420b577e-f310-4cc8-bc79-a2abcb837bbe"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.091992 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.092069 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f"} Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.093210 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.093661 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.094203 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.094664 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.136734 4809 scope.go:117] "RemoveContainer" containerID="239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988" Feb 26 14:19:39 crc kubenswrapper[4809]: E0226 14:19:39.137391 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988\": container with ID starting with 239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988 not found: ID does not exist" containerID="239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.137436 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988"} err="failed to get container status \"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988\": rpc error: code = NotFound desc = could not find container \"239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988\": container with ID starting with 239f2a9667be0dd73aa488058c639d8fa03b03c491f94cbba9b74a515177f988 not found: ID does not exist" Feb 
26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174828 4809 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/420b577e-f310-4cc8-bc79-a2abcb837bbe-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174889 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174907 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174921 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174933 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174946 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzgxh\" (UniqueName: \"kubernetes.io/projected/420b577e-f310-4cc8-bc79-a2abcb837bbe-kube-api-access-bzgxh\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174958 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174968 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174979 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.174990 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.175001 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.175034 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/420b577e-f310-4cc8-bc79-a2abcb837bbe-v4-0-config-user-idp-0-file-data\") on node \"crc\" 
DevicePath \"\"" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.255755 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.256947 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.257551 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.258092 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.258523 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.271870 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.271898 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:39 crc kubenswrapper[4809]: E0226 14:19:39.272353 4809 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.274076 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:39 crc kubenswrapper[4809]: W0226 14:19:39.310357 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-be2707844aacba83929cd24156be956064303c75c269b30658e1f945d910d1c7 WatchSource:0}: Error finding container be2707844aacba83929cd24156be956064303c75c269b30658e1f945d910d1c7: Status 404 returned error can't find the container with id be2707844aacba83929cd24156be956064303c75c269b30658e1f945d910d1c7 Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.403724 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.404361 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.404809 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: I0226 14:19:39.405525 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:39 crc kubenswrapper[4809]: E0226 14:19:39.795991 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="6.4s" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.102504 4809 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="59f577565f5aff6c74c05f743a429e1391186d376c7a2966283ef97b898d8652" exitCode=0 Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.102579 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"59f577565f5aff6c74c05f743a429e1391186d376c7a2966283ef97b898d8652"} Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.102616 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be2707844aacba83929cd24156be956064303c75c269b30658e1f945d910d1c7"} Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 
14:19:40.102903 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.102920 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.103418 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:40 crc kubenswrapper[4809]: E0226 14:19:40.103503 4809 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.103760 4809 status_manager.go:851] "Failed to get status for pod" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-56879b5cf7-rclqd\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.104201 4809 status_manager.go:851] "Failed to get status for pod" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" pod="openshift-authentication/oauth-openshift-558db77b4-rs49n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rs49n\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:40 crc kubenswrapper[4809]: I0226 14:19:40.104851 4809 status_manager.go:851] "Failed to get status for pod" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.113665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92a8290b9b86c813f628195fc438ff2f2f165b44e4d3d3f8acb4580663b19c37"} Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.113716 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b67e24e3bdd6d7865d95a0d6c818daacef0467ed784e08dd4aff89dc1f6a092"} Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.113731 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a95eecf6eb726c986d3e32ea96a3005561c84be7db4c8020926d0f5a85d4984a"} Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.113743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c4c3e7b7734fc5ff0b1edae55ac0b08ef6b8ecdfcaec367f82058850adb4386"} Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.171865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.818939 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.819249 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 14:19:41 crc kubenswrapper[4809]: I0226 14:19:41.819318 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 14:19:42 crc kubenswrapper[4809]: I0226 14:19:42.126891 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:42 crc kubenswrapper[4809]: I0226 14:19:42.126957 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:42 crc kubenswrapper[4809]: I0226 14:19:42.127241 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"324a73d2ebb3955b05844f7cfbbd3b6ad8437e5740256ad1062039a0db818415"} Feb 26 14:19:42 crc kubenswrapper[4809]: I0226 14:19:42.127293 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:44 crc kubenswrapper[4809]: I0226 14:19:44.275080 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:44 crc kubenswrapper[4809]: I0226 14:19:44.275142 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:44 crc kubenswrapper[4809]: I0226 14:19:44.282141 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:44 crc kubenswrapper[4809]: I0226 14:19:44.959505 4809 patch_prober.go:28] interesting pod/route-controller-manager-56879b5cf7-rclqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:19:44 crc kubenswrapper[4809]: I0226 14:19:44.959879 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:19:47 crc kubenswrapper[4809]: I0226 14:19:47.136595 4809 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:47 crc kubenswrapper[4809]: I0226 14:19:47.228162 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="51faf42b-0a2d-4f98-bb23-2ea2dae19c28" Feb 26 14:19:48 crc kubenswrapper[4809]: I0226 14:19:48.157230 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:48 crc kubenswrapper[4809]: I0226 14:19:48.157264 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="54c7dc88-43f5-4ab4-a5e2-682aa8aefef2" Feb 26 14:19:48 crc kubenswrapper[4809]: I0226 14:19:48.163835 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="51faf42b-0a2d-4f98-bb23-2ea2dae19c28" Feb 26 14:19:51 crc kubenswrapper[4809]: I0226 14:19:51.818874 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 14:19:51 crc kubenswrapper[4809]: I0226 14:19:51.819242 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 14:19:54 crc kubenswrapper[4809]: I0226 14:19:54.960263 4809 patch_prober.go:28] interesting pod/route-controller-manager-56879b5cf7-rclqd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:19:54 crc kubenswrapper[4809]: I0226 14:19:54.960395 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:19:56 crc kubenswrapper[4809]: I0226 14:19:56.037657 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 14:19:56 crc kubenswrapper[4809]: I0226 14:19:56.202874 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 14:19:56 crc kubenswrapper[4809]: I0226 14:19:56.482160 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" 
Feb 26 14:19:56 crc kubenswrapper[4809]: I0226 14:19:56.484346 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 14:19:56 crc kubenswrapper[4809]: I0226 14:19:56.740795 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.014113 4809 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.016964 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podStartSLOduration=35.016946723 podStartE2EDuration="35.016946723s" podCreationTimestamp="2026-02-26 14:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:19:47.216928698 +0000 UTC m=+365.690249221" watchObservedRunningTime="2026-02-26 14:19:57.016946723 +0000 UTC m=+375.490267246" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.019093 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rs49n","openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.019187 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.024353 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.026923 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.045469 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=10.045447526 podStartE2EDuration="10.045447526s" podCreationTimestamp="2026-02-26 14:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:19:57.041979661 +0000 UTC m=+375.515300214" watchObservedRunningTime="2026-02-26 14:19:57.045447526 +0000 UTC m=+375.518768059" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.262834 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.311588 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.516582 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.557454 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.574631 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.583536 4809 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.694967 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.920808 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 14:19:57 crc kubenswrapper[4809]: I0226 14:19:57.974487 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.109066 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.118637 4809 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.118872 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://aa45ac35c7512a34ae370d2e12a2fa220e546dbe465318d3a58f6995a3117737" gracePeriod=5 Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.266869 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" path="/var/lib/kubelet/pods/420b577e-f310-4cc8-bc79-a2abcb837bbe/volumes" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.486568 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.511041 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.638183 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.693653 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.797913 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.869586 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 14:19:58 crc kubenswrapper[4809]: I0226 14:19:58.969989 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.002741 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.448207 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.460322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 14:19:59 crc 
kubenswrapper[4809]: I0226 14:19:59.488876 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.547446 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.569388 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.717666 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 14:19:59 crc kubenswrapper[4809]: I0226 14:19:59.773663 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.261685 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.268857 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.294707 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.679431 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.761116 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.820618 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.841047 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.861830 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 14:20:00 crc kubenswrapper[4809]: I0226 14:20:00.922995 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.073985 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.098381 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.190620 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.275300 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.279364 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.486889 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.517582 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.575113 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.684995 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.696893 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.758047 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.818926 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.819242 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.819313 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.820184 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.820312 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f" gracePeriod=30 Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.913220 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.962825 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 14:20:01 crc kubenswrapper[4809]: I0226 14:20:01.987684 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.046267 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.081076 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.237675 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/1.log" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.238299 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/0.log" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.238354 4809 generic.go:334] "Generic (PLEG): container finished" podID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerID="bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6" exitCode=255 Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.238389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerDied","Data":"bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6"} Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.238429 4809 scope.go:117] "RemoveContainer" containerID="590f10725e7b60311c3b2a069dec3133bc36d2f9493a98daafdc83e452f7737d" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.238969 4809 scope.go:117] "RemoveContainer" containerID="bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6" Feb 26 14:20:02 crc kubenswrapper[4809]: E0226 14:20:02.239224 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-56879b5cf7-rclqd_openshift-route-controller-manager(31edfd3d-2f2a-4078-81bc-a8455878a528)\"" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.370122 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.477380 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.625406 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.644071 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.754793 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.791901 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" 
Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.850705 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.877961 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.898744 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.965109 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 14:20:02 crc kubenswrapper[4809]: I0226 14:20:02.972656 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.015191 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.059176 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.104620 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.135660 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.142213 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.223941 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.243952 4809 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.246371 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/1.log" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.248395 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.248443 4809 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="aa45ac35c7512a34ae370d2e12a2fa220e546dbe465318d3a58f6995a3117737" exitCode=137 Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.290425 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.294245 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.369093 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.461050 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.461719 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.498579 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.515764 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.571056 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.586544 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.644800 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.672222 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.704710 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.704804 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.710819 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.740322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.803758 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.803805 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.803834 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.803885 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.803925 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.804163 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.804195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.804211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.805148 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.811456 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.811959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.905473 4809 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.905506 4809 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.905516 4809 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.905524 4809 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.905532 4809 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.955611 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.959046 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:20:03 crc kubenswrapper[4809]: I0226 14:20:03.959755 4809 scope.go:117] "RemoveContainer" containerID="bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6" Feb 26 14:20:03 crc kubenswrapper[4809]: E0226 14:20:03.960199 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-56879b5cf7-rclqd_openshift-route-controller-manager(31edfd3d-2f2a-4078-81bc-a8455878a528)\"" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.016942 4809 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.109222 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.196246 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.245183 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.255588 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.255650 4809 scope.go:117] "RemoveContainer" containerID="aa45ac35c7512a34ae370d2e12a2fa220e546dbe465318d3a58f6995a3117737" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.255712 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.263346 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.389239 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.429131 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.433292 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.485376 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.548622 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.628940 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.657539 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.889087 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.901894 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 14:20:04 crc kubenswrapper[4809]: I0226 14:20:04.961075 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.118156 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 
26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.126743 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.142401 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.153947 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.162160 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.181213 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.214899 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.220156 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.282451 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.332262 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.493035 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.580295 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.644632 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.647410 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.664003 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.701121 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.701182 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.713547 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.737352 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.773026 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.779173 4809 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.779358 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.812449 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.817492 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 14:20:05 crc kubenswrapper[4809]: I0226 14:20:05.913382 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.059546 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.149880 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.266821 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.342777 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.377732 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.416322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.486582 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.491799 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.603941 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.613300 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.696642 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.762231 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.824862 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.860829 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 
14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.944149 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 14:20:06 crc kubenswrapper[4809]: I0226 14:20:06.950527 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.043313 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.113607 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.119817 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.212752 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.277599 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.302653 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.305368 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.356805 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.391072 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.462243 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.465093 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.519303 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.565111 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.566516 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.588355 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.610353 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.650461 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.659440 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.676829 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.945078 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 14:20:07 crc kubenswrapper[4809]: I0226 14:20:07.945710 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.010937 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.026932 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.054603 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.094137 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.222633 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.257522 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.258981 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.272281 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.304233 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.336291 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.366385 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.409304 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.456929 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.461860 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.498541 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.572731 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.644025 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.722200 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.843118 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.882024 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 14:20:08 crc kubenswrapper[4809]: I0226 14:20:08.918523 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.013925 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.088672 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.122317 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.156787 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.161214 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.161654 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.277515 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.360704 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.371895 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.386229 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.448701 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.518156 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.704871 4809 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.733259 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.748455 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.834920 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.900257 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.903915 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.930093 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.946743 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 14:20:09 crc kubenswrapper[4809]: I0226 14:20:09.980783 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.333050 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.347771 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.391936 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.424609 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.459289 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 14:20:10 crc kubenswrapper[4809]: I0226 14:20:10.571737 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.073471 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.088568 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.192501 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.248576 4809 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 14:20:11 crc 
kubenswrapper[4809]: I0226 14:20:11.299523 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.360873 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.417136 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.568556 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.671532 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.708958 4809 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.773007 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 14:20:11 crc kubenswrapper[4809]: I0226 14:20:11.871685 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.027812 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.125704 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.138145 4809 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.183515 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.321512 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.463600 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.498779 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.504976 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 14:20:12 crc kubenswrapper[4809]: I0226 14:20:12.720401 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 14:20:13 crc kubenswrapper[4809]: I0226 14:20:13.384108 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 14:20:13 crc kubenswrapper[4809]: I0226 14:20:13.396656 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 14:20:13 crc kubenswrapper[4809]: I0226 
14:20:13.516057 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 14:20:14 crc kubenswrapper[4809]: I0226 14:20:14.764749 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 14:20:15 crc kubenswrapper[4809]: I0226 14:20:15.257179 4809 scope.go:117] "RemoveContainer" containerID="bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6" Feb 26 14:20:15 crc kubenswrapper[4809]: I0226 14:20:15.399488 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/1.log" Feb 26 14:20:16 crc kubenswrapper[4809]: I0226 14:20:16.407139 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/1.log" Feb 26 14:20:16 crc kubenswrapper[4809]: I0226 14:20:16.408470 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerStarted","Data":"c2612778da48322c5afe4299e70fedb540c30def6096dfd37d695f8cd57a9645"} Feb 26 14:20:16 crc kubenswrapper[4809]: I0226 14:20:16.408970 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:20:16 crc kubenswrapper[4809]: I0226 14:20:16.416164 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645321 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55bd48d569-j44gp"] Feb 26 14:20:18 crc kubenswrapper[4809]: E0226 14:20:18.645624 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645643 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" containerName="oauth-openshift" Feb 26 14:20:18 crc kubenswrapper[4809]: E0226 14:20:18.645672 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645682 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 14:20:18 crc kubenswrapper[4809]: E0226 14:20:18.645699 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" containerName="installer" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645710 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" containerName="installer" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645854 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccacc64b-b318-406f-bc8c-26c85b64f18b" containerName="installer" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645870 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="420b577e-f310-4cc8-bc79-a2abcb837bbe" 
containerName="oauth-openshift" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.645892 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.646495 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.649644 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.649697 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.649639 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.650771 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.650796 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.650877 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.650935 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.650999 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.651052 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.651064 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.651128 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.651196 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.660190 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.664400 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.665724 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55bd48d569-j44gp"] Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.669783 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.680955 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-router-certs\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681011 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681075 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681126 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681156 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681184 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681211 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-audit-policies\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681246 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4334cfa8-d172-4916-81e2-520ee403cb04-audit-dir\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " 
pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-session\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681322 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-login\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681347 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-service-ca\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681390 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwpb\" (UniqueName: \"kubernetes.io/projected/4334cfa8-d172-4916-81e2-520ee403cb04-kube-api-access-prwpb\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-error\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.681448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.782886 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwpb\" (UniqueName: \"kubernetes.io/projected/4334cfa8-d172-4916-81e2-520ee403cb04-kube-api-access-prwpb\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.782940 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-error\") pod 
\"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.782973 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-router-certs\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783799 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783823 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783857 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783883 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783898 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783919 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-audit-policies\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783945 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4334cfa8-d172-4916-81e2-520ee403cb04-audit-dir\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783960 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-session\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783983 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-login\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.783999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-service-ca\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.784043 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.784162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4334cfa8-d172-4916-81e2-520ee403cb04-audit-dir\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.784695 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.784808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-audit-policies\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: 
\"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.784990 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-service-ca\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.789205 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-error\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.789212 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-login\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.789486 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.790006 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-session\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.790007 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.791583 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.791958 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: 
\"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.792456 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4334cfa8-d172-4916-81e2-520ee403cb04-v4-0-config-system-router-certs\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.802875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwpb\" (UniqueName: \"kubernetes.io/projected/4334cfa8-d172-4916-81e2-520ee403cb04-kube-api-access-prwpb\") pod \"oauth-openshift-55bd48d569-j44gp\" (UID: \"4334cfa8-d172-4916-81e2-520ee403cb04\") " pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:18 crc kubenswrapper[4809]: I0226 14:20:18.972706 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:19 crc kubenswrapper[4809]: I0226 14:20:19.418275 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55bd48d569-j44gp"] Feb 26 14:20:19 crc kubenswrapper[4809]: W0226 14:20:19.428905 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4334cfa8_d172_4916_81e2_520ee403cb04.slice/crio-459b7ebcc8a04080d1071c67cf209581e05fde67dd51b48aa816e0b16d72b829 WatchSource:0}: Error finding container 459b7ebcc8a04080d1071c67cf209581e05fde67dd51b48aa816e0b16d72b829: Status 404 returned error can't find the container with id 459b7ebcc8a04080d1071c67cf209581e05fde67dd51b48aa816e0b16d72b829 Feb 26 14:20:20 crc kubenswrapper[4809]: I0226 14:20:20.432304 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" event={"ID":"4334cfa8-d172-4916-81e2-520ee403cb04","Type":"ContainerStarted","Data":"5cf1ade45d3ceee6be81edbd0bd8147ab812e281d37d20c963dc66307d8c5067"} Feb 26 14:20:20 crc kubenswrapper[4809]: I0226 14:20:20.432355 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" event={"ID":"4334cfa8-d172-4916-81e2-520ee403cb04","Type":"ContainerStarted","Data":"459b7ebcc8a04080d1071c67cf209581e05fde67dd51b48aa816e0b16d72b829"} Feb 26 14:20:20 crc kubenswrapper[4809]: I0226 14:20:20.432682 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:20 crc kubenswrapper[4809]: I0226 14:20:20.440104 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 14:20:20 crc kubenswrapper[4809]: I0226 14:20:20.472577 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podStartSLOduration=67.472559777 podStartE2EDuration="1m7.472559777s" podCreationTimestamp="2026-02-26 14:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:20:20.454636675 +0000 UTC m=+398.927957198" watchObservedRunningTime="2026-02-26 14:20:20.472559777 +0000 UTC m=+398.945880300" Feb 26 
14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.503097 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.508159 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.508676 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.508741 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f" exitCode=137 Feb 26 14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.508774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f"} Feb 26 14:20:32 crc kubenswrapper[4809]: I0226 14:20:32.508809 4809 scope.go:117] "RemoveContainer" containerID="e9d450bb40bdad92111289be02d7bad4b76525cb4814a5ca034302b283c6dee2" Feb 26 14:20:33 crc kubenswrapper[4809]: I0226 14:20:33.516201 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 14:20:33 crc kubenswrapper[4809]: I0226 14:20:33.518286 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 14:20:33 crc kubenswrapper[4809]: I0226 14:20:33.518348 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9fd6731a973ac1ef36d3f9a00dcf2810ddc868e361968a526d4227e6f996fcec"} Feb 26 14:20:41 crc kubenswrapper[4809]: I0226 14:20:41.171282 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:20:41 crc kubenswrapper[4809]: I0226 14:20:41.818494 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:20:41 crc kubenswrapper[4809]: I0226 14:20:41.822304 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:20:42 crc kubenswrapper[4809]: I0226 14:20:42.572099 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.312038 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535260-26swq"] Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.313006 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.315065 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.315214 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.315362 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.321832 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-26swq"] Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.362716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lm9t\" (UniqueName: \"kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t\") pod \"auto-csr-approver-29535260-26swq\" (UID: \"8a84f2a5-2dad-4b84-944f-436bc29e98d5\") " pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.396695 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.397214 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" podUID="311b7185-0675-4fb3-8047-f57c92ad7c1b" containerName="controller-manager" containerID="cri-o://d22f6cd26ccdca628c0714e7ddb4d5304f13f296de22a4f17166b6c584249592" gracePeriod=30 Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.403942 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.404178 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" containerID="cri-o://c2612778da48322c5afe4299e70fedb540c30def6096dfd37d695f8cd57a9645" gracePeriod=30 Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.463840 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lm9t\" (UniqueName: \"kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t\") pod \"auto-csr-approver-29535260-26swq\" (UID: \"8a84f2a5-2dad-4b84-944f-436bc29e98d5\") " pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.481661 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lm9t\" (UniqueName: \"kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t\") pod \"auto-csr-approver-29535260-26swq\" (UID: \"8a84f2a5-2dad-4b84-944f-436bc29e98d5\") " pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.619934 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56879b5cf7-rclqd_31edfd3d-2f2a-4078-81bc-a8455878a528/route-controller-manager/1.log" Feb 26 14:20:49 crc kubenswrapper[4809]: 
I0226 14:20:49.619991 4809 generic.go:334] "Generic (PLEG): container finished" podID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerID="c2612778da48322c5afe4299e70fedb540c30def6096dfd37d695f8cd57a9645" exitCode=0 Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.620159 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerDied","Data":"c2612778da48322c5afe4299e70fedb540c30def6096dfd37d695f8cd57a9645"} Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.620203 4809 scope.go:117] "RemoveContainer" containerID="bd9409c78c7ae92e18d95001151c637402011e3ce7f6a2041de5e511086cc8c6" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.623108 4809 generic.go:334] "Generic (PLEG): container finished" podID="311b7185-0675-4fb3-8047-f57c92ad7c1b" containerID="d22f6cd26ccdca628c0714e7ddb4d5304f13f296de22a4f17166b6c584249592" exitCode=0 Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.623160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" event={"ID":"311b7185-0675-4fb3-8047-f57c92ad7c1b","Type":"ContainerDied","Data":"d22f6cd26ccdca628c0714e7ddb4d5304f13f296de22a4f17166b6c584249592"} Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.628491 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.888260 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:20:49 crc kubenswrapper[4809]: I0226 14:20:49.996181 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.074097 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert\") pod \"31edfd3d-2f2a-4078-81bc-a8455878a528\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.074173 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca\") pod \"31edfd3d-2f2a-4078-81bc-a8455878a528\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.074218 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config\") pod \"31edfd3d-2f2a-4078-81bc-a8455878a528\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.074241 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsp9c\" (UniqueName: \"kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c\") pod \"31edfd3d-2f2a-4078-81bc-a8455878a528\" (UID: \"31edfd3d-2f2a-4078-81bc-a8455878a528\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.075649 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca" (OuterVolumeSpecName: "client-ca") pod "31edfd3d-2f2a-4078-81bc-a8455878a528" (UID: "31edfd3d-2f2a-4078-81bc-a8455878a528"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.076217 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config" (OuterVolumeSpecName: "config") pod "31edfd3d-2f2a-4078-81bc-a8455878a528" (UID: "31edfd3d-2f2a-4078-81bc-a8455878a528"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.078675 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31edfd3d-2f2a-4078-81bc-a8455878a528" (UID: "31edfd3d-2f2a-4078-81bc-a8455878a528"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.079337 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c" (OuterVolumeSpecName: "kube-api-access-lsp9c") pod "31edfd3d-2f2a-4078-81bc-a8455878a528" (UID: "31edfd3d-2f2a-4078-81bc-a8455878a528"). InnerVolumeSpecName "kube-api-access-lsp9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.126978 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-26swq"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176483 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config\") pod \"311b7185-0675-4fb3-8047-f57c92ad7c1b\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176553 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca\") pod \"311b7185-0675-4fb3-8047-f57c92ad7c1b\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176576 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles\") pod \"311b7185-0675-4fb3-8047-f57c92ad7c1b\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176622 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert\") pod \"311b7185-0675-4fb3-8047-f57c92ad7c1b\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176641 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztw8c\" (UniqueName: \"kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c\") pod \"311b7185-0675-4fb3-8047-f57c92ad7c1b\" (UID: \"311b7185-0675-4fb3-8047-f57c92ad7c1b\") " Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176826 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31edfd3d-2f2a-4078-81bc-a8455878a528-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176839 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176849 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31edfd3d-2f2a-4078-81bc-a8455878a528-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.176858 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsp9c\" (UniqueName: \"kubernetes.io/projected/31edfd3d-2f2a-4078-81bc-a8455878a528-kube-api-access-lsp9c\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.177307 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca" (OuterVolumeSpecName: "client-ca") pod "311b7185-0675-4fb3-8047-f57c92ad7c1b" (UID: "311b7185-0675-4fb3-8047-f57c92ad7c1b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.177587 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config" (OuterVolumeSpecName: "config") pod "311b7185-0675-4fb3-8047-f57c92ad7c1b" (UID: "311b7185-0675-4fb3-8047-f57c92ad7c1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.177762 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "311b7185-0675-4fb3-8047-f57c92ad7c1b" (UID: "311b7185-0675-4fb3-8047-f57c92ad7c1b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.179868 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c" (OuterVolumeSpecName: "kube-api-access-ztw8c") pod "311b7185-0675-4fb3-8047-f57c92ad7c1b" (UID: "311b7185-0675-4fb3-8047-f57c92ad7c1b"). InnerVolumeSpecName "kube-api-access-ztw8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.180809 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "311b7185-0675-4fb3-8047-f57c92ad7c1b" (UID: "311b7185-0675-4fb3-8047-f57c92ad7c1b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.277811 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.277844 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.277856 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/311b7185-0675-4fb3-8047-f57c92ad7c1b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.277868 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztw8c\" (UniqueName: \"kubernetes.io/projected/311b7185-0675-4fb3-8047-f57c92ad7c1b-kube-api-access-ztw8c\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.277880 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311b7185-0675-4fb3-8047-f57c92ad7c1b-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.630323 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" event={"ID":"311b7185-0675-4fb3-8047-f57c92ad7c1b","Type":"ContainerDied","Data":"fc717865ae26a2bb64a722093b8e8f35818df723a6d1a81502b610c8f0efd699"} Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.630365 4809 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55ccd978bc-npnhf" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.630385 4809 scope.go:117] "RemoveContainer" containerID="d22f6cd26ccdca628c0714e7ddb4d5304f13f296de22a4f17166b6c584249592" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.633395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" event={"ID":"31edfd3d-2f2a-4078-81bc-a8455878a528","Type":"ContainerDied","Data":"4d521683c5951075a540f759c73040b8925cf52318484072b9daf13cb0e0a14a"} Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.633459 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.635541 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-26swq" event={"ID":"8a84f2a5-2dad-4b84-944f-436bc29e98d5","Type":"ContainerStarted","Data":"70be0034485f4bc7c4a547fd5f5b3ff158a7cfc70144f01782a0a53ba54ef6cf"} Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.676074 4809 scope.go:117] "RemoveContainer" containerID="c2612778da48322c5afe4299e70fedb540c30def6096dfd37d695f8cd57a9645" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682262 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:20:50 crc kubenswrapper[4809]: E0226 14:20:50.682626 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682648 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: E0226 14:20:50.682661 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311b7185-0675-4fb3-8047-f57c92ad7c1b" containerName="controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682668 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="311b7185-0675-4fb3-8047-f57c92ad7c1b" containerName="controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: E0226 14:20:50.682681 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682688 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682809 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="311b7185-0675-4fb3-8047-f57c92ad7c1b" containerName="controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682822 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.682831 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.683289 4809 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.687666 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.687760 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.687978 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.688158 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.688771 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.688853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.695864 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:20:50 crc kubenswrapper[4809]: E0226 14:20:50.697134 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.697164 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.697288 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" containerName="route-controller-manager" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.697778 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.700950 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.701171 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.701497 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.702186 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.703219 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.707361 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.708975 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.728250 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.733097 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56879b5cf7-rclqd"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.738125 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.742358 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.747374 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.752232 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55ccd978bc-npnhf"] Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.785188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcg7z\" (UniqueName: \"kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.785231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 
14:20:50.785265 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.785280 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.885992 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886097 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbkb\" (UniqueName: \"kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886469 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcg7z\" (UniqueName: \"kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886657 
4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886679 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.886702 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.887726 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.888447 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.898950 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.902512 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcg7z\" (UniqueName: \"kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z\") pod \"route-controller-manager-65d69d64d8-d6r8g\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.987534 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.987580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.987611 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbkb\" (UniqueName: \"kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.987653 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.987678 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.988629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.988853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.988974 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:50 crc kubenswrapper[4809]: I0226 14:20:50.991502 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.005162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbkb\" (UniqueName: \"kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb\") pod \"controller-manager-77d889f6fd-pjv8r\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " 
pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.015176 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.029008 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.296823 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:20:51 crc kubenswrapper[4809]: W0226 14:20:51.299260 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74fbc4a_e7b2_4475_bdea_1302cc0dae91.slice/crio-31fff4bdac4a2bc0f595bba8153657fbb3232bd9a1cca51eca362d355eaf3288 WatchSource:0}: Error finding container 31fff4bdac4a2bc0f595bba8153657fbb3232bd9a1cca51eca362d355eaf3288: Status 404 returned error can't find the container with id 31fff4bdac4a2bc0f595bba8153657fbb3232bd9a1cca51eca362d355eaf3288 Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.559661 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:20:51 crc kubenswrapper[4809]: W0226 14:20:51.568622 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6092a0cc_bdbc_4070_95d5_b23bf3035e7c.slice/crio-5ca2b0bd10023003aadb6e8a5d3ad6b4ec8197d8b2f8070c013eddb79a804223 WatchSource:0}: Error finding container 5ca2b0bd10023003aadb6e8a5d3ad6b4ec8197d8b2f8070c013eddb79a804223: Status 404 returned error can't find the container with id 5ca2b0bd10023003aadb6e8a5d3ad6b4ec8197d8b2f8070c013eddb79a804223 Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.641958 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" event={"ID":"6092a0cc-bdbc-4070-95d5-b23bf3035e7c","Type":"ContainerStarted","Data":"5ca2b0bd10023003aadb6e8a5d3ad6b4ec8197d8b2f8070c013eddb79a804223"} Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.644791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-26swq" event={"ID":"8a84f2a5-2dad-4b84-944f-436bc29e98d5","Type":"ContainerStarted","Data":"49c41c2c5959e8c577af5777d675af4c277a67b777a1025e52327720e5a7bf21"} Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.646402 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" event={"ID":"c74fbc4a-e7b2-4475-bdea-1302cc0dae91","Type":"ContainerStarted","Data":"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21"} Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.646441 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" event={"ID":"c74fbc4a-e7b2-4475-bdea-1302cc0dae91","Type":"ContainerStarted","Data":"31fff4bdac4a2bc0f595bba8153657fbb3232bd9a1cca51eca362d355eaf3288"} Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.646624 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.682595 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535260-26swq" podStartSLOduration=1.649257948 podStartE2EDuration="2.682578823s" podCreationTimestamp="2026-02-26 14:20:49 +0000 UTC" firstStartedPulling="2026-02-26 14:20:50.125616484 +0000 UTC m=+428.598937007" lastFinishedPulling="2026-02-26 14:20:51.158937359 +0000 UTC m=+429.632257882" observedRunningTime="2026-02-26 14:20:51.663228687 +0000 UTC m=+430.136549210" watchObservedRunningTime="2026-02-26 14:20:51.682578823 +0000 UTC m=+430.155899346" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.683710 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" podStartSLOduration=2.683705347 podStartE2EDuration="2.683705347s" podCreationTimestamp="2026-02-26 14:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:20:51.679989214 +0000 UTC m=+430.153309767" watchObservedRunningTime="2026-02-26 14:20:51.683705347 +0000 UTC m=+430.157025870" Feb 26 14:20:51 crc kubenswrapper[4809]: I0226 14:20:51.876817 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.264535 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311b7185-0675-4fb3-8047-f57c92ad7c1b" path="/var/lib/kubelet/pods/311b7185-0675-4fb3-8047-f57c92ad7c1b/volumes" Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.265449 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31edfd3d-2f2a-4078-81bc-a8455878a528" path="/var/lib/kubelet/pods/31edfd3d-2f2a-4078-81bc-a8455878a528/volumes" Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.656059 4809 generic.go:334] "Generic (PLEG): container finished" podID="8a84f2a5-2dad-4b84-944f-436bc29e98d5" containerID="49c41c2c5959e8c577af5777d675af4c277a67b777a1025e52327720e5a7bf21" exitCode=0 Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.656115 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-26swq" event={"ID":"8a84f2a5-2dad-4b84-944f-436bc29e98d5","Type":"ContainerDied","Data":"49c41c2c5959e8c577af5777d675af4c277a67b777a1025e52327720e5a7bf21"} Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.657831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" event={"ID":"6092a0cc-bdbc-4070-95d5-b23bf3035e7c","Type":"ContainerStarted","Data":"442c89a671e4d3e150ad5b16b50a89e76f3d5346300a735347af8997f031d8ff"} Feb 26 14:20:52 crc kubenswrapper[4809]: I0226 14:20:52.683509 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" podStartSLOduration=3.683491647 podStartE2EDuration="3.683491647s" podCreationTimestamp="2026-02-26 14:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:20:52.681883828 +0000 UTC m=+431.155204351" watchObservedRunningTime="2026-02-26 14:20:52.683491647 +0000 UTC m=+431.156812170" 
Feb 26 14:20:53 crc kubenswrapper[4809]: I0226 14:20:53.662939 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:53 crc kubenswrapper[4809]: I0226 14:20:53.669735 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:20:53 crc kubenswrapper[4809]: I0226 14:20:53.959535 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.027185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lm9t\" (UniqueName: \"kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t\") pod \"8a84f2a5-2dad-4b84-944f-436bc29e98d5\" (UID: \"8a84f2a5-2dad-4b84-944f-436bc29e98d5\") " Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.033222 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t" (OuterVolumeSpecName: "kube-api-access-6lm9t") pod "8a84f2a5-2dad-4b84-944f-436bc29e98d5" (UID: "8a84f2a5-2dad-4b84-944f-436bc29e98d5"). InnerVolumeSpecName "kube-api-access-6lm9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.128893 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lm9t\" (UniqueName: \"kubernetes.io/projected/8a84f2a5-2dad-4b84-944f-436bc29e98d5-kube-api-access-6lm9t\") on node \"crc\" DevicePath \"\"" Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.674277 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535260-26swq" Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.674339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535260-26swq" event={"ID":"8a84f2a5-2dad-4b84-944f-436bc29e98d5","Type":"ContainerDied","Data":"70be0034485f4bc7c4a547fd5f5b3ff158a7cfc70144f01782a0a53ba54ef6cf"} Feb 26 14:20:54 crc kubenswrapper[4809]: I0226 14:20:54.674370 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70be0034485f4bc7c4a547fd5f5b3ff158a7cfc70144f01782a0a53ba54ef6cf" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.641542 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxr8"] Feb 26 14:20:59 crc kubenswrapper[4809]: E0226 14:20:59.642116 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a84f2a5-2dad-4b84-944f-436bc29e98d5" containerName="oc" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.642132 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a84f2a5-2dad-4b84-944f-436bc29e98d5" containerName="oc" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.642259 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a84f2a5-2dad-4b84-944f-436bc29e98d5" containerName="oc" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.642730 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.661430 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxr8"] Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802454 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-certificates\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802535 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802658 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-trusted-ca\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8kb\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-kube-api-access-8l8kb\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-tls\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.802973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.822507 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904111 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-certificates\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904176 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904204 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-trusted-ca\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8kb\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-kube-api-access-8l8kb\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.905208 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.905451 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-certificates\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.904274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-tls\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.905732 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.906004 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-trusted-ca\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.916733 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-registry-tls\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.916842 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.925049 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8kb\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-kube-api-access-8l8kb\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.937974 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxr8\" (UID: \"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:20:59 crc kubenswrapper[4809]: I0226 14:20:59.957137 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:21:00 crc kubenswrapper[4809]: I0226 14:21:00.359633 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxr8"] Feb 26 14:21:00 crc kubenswrapper[4809]: I0226 14:21:00.724721 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" event={"ID":"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9","Type":"ContainerStarted","Data":"6583d1c115a21567538daef6b90fa6a91411eddcd6c6410a0e91fa31915c927b"} Feb 26 14:21:00 crc kubenswrapper[4809]: I0226 14:21:00.724778 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" event={"ID":"10cc24e3-e3c6-44d9-a1fb-0f7813c25fb9","Type":"ContainerStarted","Data":"abe17ab0525cfe175c2923bc9d549ac18d63825e6b83ff71127bd915a95a3173"} Feb 26 14:21:00 crc kubenswrapper[4809]: I0226 14:21:00.724946 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:21:00 crc kubenswrapper[4809]: I0226 14:21:00.751004 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" podStartSLOduration=1.750986518 podStartE2EDuration="1.750986518s" podCreationTimestamp="2026-02-26 14:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:21:00.749574055 +0000 UTC m=+439.222894598" watchObservedRunningTime="2026-02-26 14:21:00.750986518 +0000 UTC m=+439.224307041" Feb 26 14:21:11 crc kubenswrapper[4809]: I0226 14:21:11.794118 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:21:11 crc kubenswrapper[4809]: I0226 14:21:11.794557 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:21:19 crc kubenswrapper[4809]: I0226 14:21:19.964190 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tsxr8" Feb 26 14:21:20 crc kubenswrapper[4809]: I0226 14:21:20.034556 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.841549 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.842293 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdrnc" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="registry-server" containerID="cri-o://0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd" gracePeriod=30 Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.853733 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.854319 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-45bqj" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="registry-server" containerID="cri-o://0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb" gracePeriod=30 Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.861006 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.861622 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" containerID="cri-o://a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540" gracePeriod=30 Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.869924 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.870211 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2kqz" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="registry-server" containerID="cri-o://bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200" gracePeriod=30 Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.886283 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cn6jt"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.887345 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.900237 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.900531 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jwxvj" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="registry-server" containerID="cri-o://ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57" gracePeriod=30 Feb 26 14:21:30 crc kubenswrapper[4809]: I0226 14:21:30.903649 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cn6jt"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.043451 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.043511 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.043533 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pk5\" (UniqueName: \"kubernetes.io/projected/75ed42a0-23bb-4422-bdde-87edffef1c8a-kube-api-access-b9pk5\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.145962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.146044 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.146062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pk5\" (UniqueName: \"kubernetes.io/projected/75ed42a0-23bb-4422-bdde-87edffef1c8a-kube-api-access-b9pk5\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.147597 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.155438 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75ed42a0-23bb-4422-bdde-87edffef1c8a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.163382 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9pk5\" (UniqueName: \"kubernetes.io/projected/75ed42a0-23bb-4422-bdde-87edffef1c8a-kube-api-access-b9pk5\") pod \"marketplace-operator-79b997595-cn6jt\" (UID: \"75ed42a0-23bb-4422-bdde-87edffef1c8a\") " pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.291841 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.300926 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.383924 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.387852 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.396178 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.411827 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.453352 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities\") pod \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.453472 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content\") pod \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.453505 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r4d2\" (UniqueName: \"kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2\") pod \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\" (UID: \"2328fe45-3fdc-4f65-9377-3e43e72b4b22\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.454249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities" (OuterVolumeSpecName: "utilities") pod "2328fe45-3fdc-4f65-9377-3e43e72b4b22" (UID: "2328fe45-3fdc-4f65-9377-3e43e72b4b22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.460593 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2" (OuterVolumeSpecName: "kube-api-access-8r4d2") pod "2328fe45-3fdc-4f65-9377-3e43e72b4b22" (UID: "2328fe45-3fdc-4f65-9377-3e43e72b4b22"). InnerVolumeSpecName "kube-api-access-8r4d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.519503 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2328fe45-3fdc-4f65-9377-3e43e72b4b22" (UID: "2328fe45-3fdc-4f65-9377-3e43e72b4b22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555006 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6rzz\" (UniqueName: \"kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz\") pod \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555073 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics\") pod \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555113 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpjmn\" (UniqueName: \"kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn\") pod \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555134 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content\") pod \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555198 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities\") pod \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content\") pod \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.555898 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities" (OuterVolumeSpecName: "utilities") pod "a0cd2a65-4aaf-4322-8e24-ca1aa935c510" (UID: "a0cd2a65-4aaf-4322-8e24-ca1aa935c510"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556620 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca\") pod \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556672 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities\") pod \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556704 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66sjq\" (UniqueName: \"kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq\") pod \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\" (UID: \"f9f47aa1-3b5e-4e70-b27f-88ff985a0104\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556747 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhrh\" (UniqueName: \"kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh\") pod \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\" (UID: \"27674e3b-1fb9-4e3a-83d9-2b77ccd40571\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556766 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities\") pod \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\" (UID: \"2312cf07-fe31-4bbd-97ec-b330a5edbe87\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556787 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content\") pod \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\" (UID: \"a0cd2a65-4aaf-4322-8e24-ca1aa935c510\") " Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556958 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "27674e3b-1fb9-4e3a-83d9-2b77ccd40571" (UID: "27674e3b-1fb9-4e3a-83d9-2b77ccd40571"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.556989 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557000 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r4d2\" (UniqueName: \"kubernetes.io/projected/2328fe45-3fdc-4f65-9377-3e43e72b4b22-kube-api-access-8r4d2\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557027 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557036 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2328fe45-3fdc-4f65-9377-3e43e72b4b22-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557781 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities" (OuterVolumeSpecName: "utilities") pod "2312cf07-fe31-4bbd-97ec-b330a5edbe87" (UID: "2312cf07-fe31-4bbd-97ec-b330a5edbe87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557834 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities" (OuterVolumeSpecName: "utilities") pod "f9f47aa1-3b5e-4e70-b27f-88ff985a0104" (UID: "f9f47aa1-3b5e-4e70-b27f-88ff985a0104"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.557961 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz" (OuterVolumeSpecName: "kube-api-access-q6rzz") pod "a0cd2a65-4aaf-4322-8e24-ca1aa935c510" (UID: "a0cd2a65-4aaf-4322-8e24-ca1aa935c510"). InnerVolumeSpecName "kube-api-access-q6rzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.560604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq" (OuterVolumeSpecName: "kube-api-access-66sjq") pod "f9f47aa1-3b5e-4e70-b27f-88ff985a0104" (UID: "f9f47aa1-3b5e-4e70-b27f-88ff985a0104"). InnerVolumeSpecName "kube-api-access-66sjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.563124 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh" (OuterVolumeSpecName: "kube-api-access-mxhrh") pod "27674e3b-1fb9-4e3a-83d9-2b77ccd40571" (UID: "27674e3b-1fb9-4e3a-83d9-2b77ccd40571"). InnerVolumeSpecName "kube-api-access-mxhrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.558197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "27674e3b-1fb9-4e3a-83d9-2b77ccd40571" (UID: "27674e3b-1fb9-4e3a-83d9-2b77ccd40571"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.576126 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn" (OuterVolumeSpecName: "kube-api-access-gpjmn") pod "2312cf07-fe31-4bbd-97ec-b330a5edbe87" (UID: "2312cf07-fe31-4bbd-97ec-b330a5edbe87"). InnerVolumeSpecName "kube-api-access-gpjmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.578226 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9f47aa1-3b5e-4e70-b27f-88ff985a0104" (UID: "f9f47aa1-3b5e-4e70-b27f-88ff985a0104"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.611862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2312cf07-fe31-4bbd-97ec-b330a5edbe87" (UID: "2312cf07-fe31-4bbd-97ec-b330a5edbe87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658073 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658122 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658145 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658163 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66sjq\" (UniqueName: \"kubernetes.io/projected/f9f47aa1-3b5e-4e70-b27f-88ff985a0104-kube-api-access-66sjq\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658178 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhrh\" (UniqueName: \"kubernetes.io/projected/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-kube-api-access-mxhrh\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658195 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658209 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6rzz\" (UniqueName: \"kubernetes.io/projected/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-kube-api-access-q6rzz\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658223 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27674e3b-1fb9-4e3a-83d9-2b77ccd40571-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658240 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpjmn\" (UniqueName: \"kubernetes.io/projected/2312cf07-fe31-4bbd-97ec-b330a5edbe87-kube-api-access-gpjmn\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.658259 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2312cf07-fe31-4bbd-97ec-b330a5edbe87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.683120 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0cd2a65-4aaf-4322-8e24-ca1aa935c510" (UID: "a0cd2a65-4aaf-4322-8e24-ca1aa935c510"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.752617 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cn6jt"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.759923 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd2a65-4aaf-4322-8e24-ca1aa935c510-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:31 crc kubenswrapper[4809]: W0226 14:21:31.761690 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75ed42a0_23bb_4422_bdde_87edffef1c8a.slice/crio-3639baf713bdce5c0ca43df23d05497c5f223db056647deca4d1bae9bea3592f WatchSource:0}: Error finding container 3639baf713bdce5c0ca43df23d05497c5f223db056647deca4d1bae9bea3592f: Status 404 returned error can't find the container with id 3639baf713bdce5c0ca43df23d05497c5f223db056647deca4d1bae9bea3592f Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.891273 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerID="bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200" exitCode=0 Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.891381 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2kqz" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.891403 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerDied","Data":"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.891961 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2kqz" event={"ID":"f9f47aa1-3b5e-4e70-b27f-88ff985a0104","Type":"ContainerDied","Data":"0cf592e4b62bfb1b58602844a0bb9e785c15e45a5b70c97ef0c086a5e3d37851"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.891984 4809 scope.go:117] "RemoveContainer" containerID="bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.894523 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" event={"ID":"75ed42a0-23bb-4422-bdde-87edffef1c8a","Type":"ContainerStarted","Data":"3639baf713bdce5c0ca43df23d05497c5f223db056647deca4d1bae9bea3592f"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.897990 4809 generic.go:334] "Generic (PLEG): container finished" podID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerID="0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd" exitCode=0 Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.898030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerDied","Data":"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.898072 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdrnc" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.898094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrnc" event={"ID":"2312cf07-fe31-4bbd-97ec-b330a5edbe87","Type":"ContainerDied","Data":"1d3cc1c88c1b82e4ecc40d135000de399fd816a02a9d7088f79505325006c20b"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.899504 4809 generic.go:334] "Generic (PLEG): container finished" podID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerID="a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540" exitCode=0 Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.899554 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.899556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" event={"ID":"27674e3b-1fb9-4e3a-83d9-2b77ccd40571","Type":"ContainerDied","Data":"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.899584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v58kd" event={"ID":"27674e3b-1fb9-4e3a-83d9-2b77ccd40571","Type":"ContainerDied","Data":"69cb1271f674d26f8a00e561cd3563d9a92323f9fb827ab1fedb084eeffa86cd"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.902547 4809 generic.go:334] "Generic (PLEG): container finished" podID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerID="0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb" exitCode=0 Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.902608 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerDied","Data":"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.902634 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-45bqj" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.902636 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bqj" event={"ID":"2328fe45-3fdc-4f65-9377-3e43e72b4b22","Type":"ContainerDied","Data":"3880eff705a607e223d9160d08d7c2739df3920d026c3ed2dd79a283f3f2fee4"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.904887 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerID="ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57" exitCode=0 Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.904918 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerDied","Data":"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.904938 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jwxvj" event={"ID":"a0cd2a65-4aaf-4322-8e24-ca1aa935c510","Type":"ContainerDied","Data":"f2b2f3115ee6936ac1324047ebec1f4851149c3b933f448f644e45fde0e19cac"} Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.905000 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jwxvj" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.916847 4809 scope.go:117] "RemoveContainer" containerID="1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.926472 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.929231 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2kqz"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.949687 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.960500 4809 scope.go:117] "RemoveContainer" containerID="e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.966678 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v58kd"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.974061 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.986693 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-45bqj"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.990007 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.991091 4809 scope.go:117] "RemoveContainer" containerID="bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200" Feb 26 14:21:31 crc kubenswrapper[4809]: E0226 14:21:31.991562 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200\": container with ID starting with 
bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200 not found: ID does not exist" containerID="bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.991593 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200"} err="failed to get container status \"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200\": rpc error: code = NotFound desc = could not find container \"bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200\": container with ID starting with bc3cd82dd407633b55184dc6532a56e903dc3d87ca3c3510b35463226099a200 not found: ID does not exist" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.991636 4809 scope.go:117] "RemoveContainer" containerID="1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757" Feb 26 14:21:31 crc kubenswrapper[4809]: E0226 14:21:31.991958 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757\": container with ID starting with 1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757 not found: ID does not exist" containerID="1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.991989 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757"} err="failed to get container status \"1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757\": rpc error: code = NotFound desc = could not find container \"1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757\": container with ID starting with 1a7229c8b7483aa3cae2e2f3458174437f8d6e6769ecae2b2d17f28c9b5b8757 not found: ID does not exist" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.992030 4809 scope.go:117] "RemoveContainer" containerID="e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb" Feb 26 14:21:31 crc kubenswrapper[4809]: E0226 14:21:31.992258 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb\": container with ID starting with e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb not found: ID does not exist" containerID="e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.992273 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb"} err="failed to get container status \"e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb\": rpc error: code = NotFound desc = could not find container \"e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb\": container with ID starting with e594c1123b7a3a7aad25ef3776a6e1c38e6d5cf579801f983ea812585d5dd5bb not found: ID does not exist" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.992285 4809 scope.go:117] "RemoveContainer" containerID="0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd" Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.992970 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-kdrnc"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.995512 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:21:31 crc kubenswrapper[4809]: I0226 14:21:31.997833 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jwxvj"] Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.008136 4809 scope.go:117] "RemoveContainer" containerID="2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.047104 4809 scope.go:117] "RemoveContainer" containerID="473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.068071 4809 scope.go:117] "RemoveContainer" containerID="0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.068533 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd\": container with ID starting with 0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd not found: ID does not exist" containerID="0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.068584 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd"} err="failed to get container status \"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd\": rpc error: code = NotFound desc = could not find container \"0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd\": container with ID starting with 0cdb082a4f113a7abf0e049698db890428712c59c0720beb446c6f60017e0edd not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.068612 4809 scope.go:117] "RemoveContainer" containerID="2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.068953 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031\": container with ID starting with 2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031 not found: ID does not exist" containerID="2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.068982 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031"} err="failed to get container status \"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031\": rpc error: code = NotFound desc = could not find container \"2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031\": container with ID starting with 2ff2ad3c9ae6c880fdce3cbae3bb5aa099d78254471166a638dbd582b00ce031 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.069002 4809 scope.go:117] "RemoveContainer" containerID="473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.069344 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3\": container with ID starting with 473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3 not found: ID does not exist" containerID="473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.069372 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3"} err="failed to get container status \"473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3\": rpc error: code = NotFound desc = could not find container \"473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3\": container with ID starting with 473e3c1ff404f53d802da175f38ef5660b6672ce4e99d626983e4c7b4bc1e3f3 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.069387 4809 scope.go:117] "RemoveContainer" containerID="a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.084081 4809 scope.go:117] "RemoveContainer" containerID="a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.084593 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540\": container with ID starting with a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540 not found: ID does not exist" containerID="a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.084620 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540"} err="failed to get container status \"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540\": rpc error: code = NotFound desc = could not find container \"a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540\": container with ID starting with a16d4295eeda5e1ec01f0de60cf7b5d7130ec5d7443b782cd5ffef228ef73540 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.084640 4809 scope.go:117] "RemoveContainer" containerID="0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.098050 4809 scope.go:117] "RemoveContainer" containerID="7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.117283 4809 scope.go:117] "RemoveContainer" containerID="d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.151370 4809 scope.go:117] "RemoveContainer" containerID="0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.153533 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb\": container with ID starting with 0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb not found: ID does not exist" containerID="0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb" Feb 26 14:21:32 crc kubenswrapper[4809]: 
I0226 14:21:32.153582 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb"} err="failed to get container status \"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb\": rpc error: code = NotFound desc = could not find container \"0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb\": container with ID starting with 0eefba4c8d334c46d5a2b0a28101709373f6e51d5399eeff3b2b73ed3c041ccb not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.153605 4809 scope.go:117] "RemoveContainer" containerID="7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.154092 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2\": container with ID starting with 7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2 not found: ID does not exist" containerID="7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.154127 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2"} err="failed to get container status \"7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2\": rpc error: code = NotFound desc = could not find container \"7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2\": container with ID starting with 7b066f5787e967eab8907b21baf283a50036214bf660d856d9e08d6b825360f2 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.154141 4809 scope.go:117] "RemoveContainer" containerID="d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.154548 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9\": container with ID starting with d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9 not found: ID does not exist" containerID="d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.154572 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9"} err="failed to get container status \"d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9\": rpc error: code = NotFound desc = could not find container \"d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9\": container with ID starting with d1971f3be87cfa5dbabc58348afb52adfcf504b1cdf807d0207c882911ea01e9 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.154585 4809 scope.go:117] "RemoveContainer" containerID="ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.175849 4809 scope.go:117] "RemoveContainer" containerID="4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.194235 4809 scope.go:117] "RemoveContainer" 
containerID="c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.211542 4809 scope.go:117] "RemoveContainer" containerID="ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.212067 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57\": container with ID starting with ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57 not found: ID does not exist" containerID="ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.212094 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57"} err="failed to get container status \"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57\": rpc error: code = NotFound desc = could not find container \"ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57\": container with ID starting with ef1f47d94106b950d8292c1b2855221b895abf0c148b8c07ae352e21b437de57 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.212124 4809 scope.go:117] "RemoveContainer" containerID="4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.212425 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507\": container with ID starting with 4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507 not found: ID does not exist" containerID="4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.212456 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507"} err="failed to get container status \"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507\": rpc error: code = NotFound desc = could not find container \"4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507\": container with ID starting with 4fed0d1c555b43fda18d215ba36c4f0762841515f7c5105f49618e826e49f507 not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.212474 4809 scope.go:117] "RemoveContainer" containerID="c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a" Feb 26 14:21:32 crc kubenswrapper[4809]: E0226 14:21:32.212740 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a\": container with ID starting with c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a not found: ID does not exist" containerID="c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.212770 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a"} err="failed to get container status \"c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a\": rpc error: code = 
NotFound desc = could not find container \"c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a\": container with ID starting with c65657fa777f6cd43352cd635359d078c05021aef686543f15cdc133aae25c1a not found: ID does not exist" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.264799 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" path="/var/lib/kubelet/pods/2312cf07-fe31-4bbd-97ec-b330a5edbe87/volumes" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.265542 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" path="/var/lib/kubelet/pods/2328fe45-3fdc-4f65-9377-3e43e72b4b22/volumes" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.266207 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" path="/var/lib/kubelet/pods/27674e3b-1fb9-4e3a-83d9-2b77ccd40571/volumes" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.267090 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" path="/var/lib/kubelet/pods/a0cd2a65-4aaf-4322-8e24-ca1aa935c510/volumes" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.267628 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" path="/var/lib/kubelet/pods/f9f47aa1-3b5e-4e70-b27f-88ff985a0104/volumes" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.914038 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" event={"ID":"75ed42a0-23bb-4422-bdde-87edffef1c8a","Type":"ContainerStarted","Data":"c32996f6577e4446ebf66b6b2e3d66b2bc52c99caf8e7b19450e5acda47162ba"} Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.914412 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.917179 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" Feb 26 14:21:32 crc kubenswrapper[4809]: I0226 14:21:32.934586 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" podStartSLOduration=2.9345677390000002 podStartE2EDuration="2.934567739s" podCreationTimestamp="2026-02-26 14:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:21:32.934074764 +0000 UTC m=+471.407395297" watchObservedRunningTime="2026-02-26 14:21:32.934567739 +0000 UTC m=+471.407888262" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177330 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2mm4b"] Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177579 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177595 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177606 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177614 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177628 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177636 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177646 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177654 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177666 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177674 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177688 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177696 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177707 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177713 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177727 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177735 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177751 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177758 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177771 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177780 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="extract-utilities" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177791 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177800 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177811 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177818 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="extract-content" Feb 26 14:21:33 crc kubenswrapper[4809]: E0226 14:21:33.177831 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177840 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177949 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2328fe45-3fdc-4f65-9377-3e43e72b4b22" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177964 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="27674e3b-1fb9-4e3a-83d9-2b77ccd40571" containerName="marketplace-operator" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177973 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cd2a65-4aaf-4322-8e24-ca1aa935c510" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.177987 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f47aa1-3b5e-4e70-b27f-88ff985a0104" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.178000 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2312cf07-fe31-4bbd-97ec-b330a5edbe87" containerName="registry-server" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.178875 4809 util.go:30] "No sandbox for pod can be found. 
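
The cpu_manager and memory_manager "RemoveStaleState" burst fires when redhat-marketplace-2mm4b is admitted: per-container assignments recorded for the pods that just went away are purged before resources are granted again. A minimal sketch, assuming a simple podUID-keyed map rather than the kubelet's real state store:

package main

import "fmt"

// assignments maps podUID -> container name -> recorded cpuset.
type assignments map[string]map[string]string

// removeStaleState drops every assignment whose pod is no longer live.
func removeStaleState(st assignments, live map[string]bool) {
	for podUID, containers := range st {
		if live[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, name)
		}
		delete(st, podUID) // mirrors "Deleted CPUSet assignment"
	}
}

func main() {
	st := assignments{"2328fe45": {"registry-server": "2-3"}}
	removeStaleState(st, map[string]bool{"45178ad4": true})
}
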
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.180754 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.188191 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mm4b"] Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.277676 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-utilities\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.277738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj69b\" (UniqueName: \"kubernetes.io/projected/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-kube-api-access-jj69b\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.277785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-catalog-content\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.379249 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-utilities\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.379570 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj69b\" (UniqueName: \"kubernetes.io/projected/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-kube-api-access-jj69b\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.379686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-catalog-content\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.380149 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-utilities\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.381174 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-catalog-content\") pod \"redhat-marketplace-2mm4b\" (UID: 
\"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.403052 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj69b\" (UniqueName: \"kubernetes.io/projected/45178ad4-29b4-4221-ab5f-8d2c6a9a92d2-kube-api-access-jj69b\") pod \"redhat-marketplace-2mm4b\" (UID: \"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2\") " pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.500167 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.696155 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mm4b"] Feb 26 14:21:33 crc kubenswrapper[4809]: W0226 14:21:33.698689 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45178ad4_29b4_4221_ab5f_8d2c6a9a92d2.slice/crio-7a49c1d5c385e82e3e874ba7a7394bf08f958e5bd8c4478e0fe8bc7d57cb43e3 WatchSource:0}: Error finding container 7a49c1d5c385e82e3e874ba7a7394bf08f958e5bd8c4478e0fe8bc7d57cb43e3: Status 404 returned error can't find the container with id 7a49c1d5c385e82e3e874ba7a7394bf08f958e5bd8c4478e0fe8bc7d57cb43e3 Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.783973 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"] Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.785414 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.787806 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.794205 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"] Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.888520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.888579 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9vg\" (UniqueName: \"kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.888653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.926307 4809 generic.go:334] "Generic (PLEG): container finished" podID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" 
containerID="a293c3988e767448cd387b4129b649782a572c21ee4f9bbaab68a7bd48a4a7e4" exitCode=0 Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.926394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mm4b" event={"ID":"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2","Type":"ContainerDied","Data":"a293c3988e767448cd387b4129b649782a572c21ee4f9bbaab68a7bd48a4a7e4"} Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.926419 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mm4b" event={"ID":"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2","Type":"ContainerStarted","Data":"7a49c1d5c385e82e3e874ba7a7394bf08f958e5bd8c4478e0fe8bc7d57cb43e3"} Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.989677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.989747 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c9vg\" (UniqueName: \"kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.989792 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.990256 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:33 crc kubenswrapper[4809]: I0226 14:21:33.990305 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.009653 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c9vg\" (UniqueName: \"kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg\") pod \"certified-operators-9nt9t\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.111434 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.332381 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"] Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.936197 4809 generic.go:334] "Generic (PLEG): container finished" podID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerID="702799a5674cf93832e8a9e68ac5b3407f140c8daa635011a57253842b538ec8" exitCode=0 Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.936320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerDied","Data":"702799a5674cf93832e8a9e68ac5b3407f140c8daa635011a57253842b538ec8"} Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.936713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerStarted","Data":"38e884104c484c2f93b6a680ec0ec40b27525ae78fd80c51add1156a1face721"} Feb 26 14:21:34 crc kubenswrapper[4809]: I0226 14:21:34.939652 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mm4b" event={"ID":"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2","Type":"ContainerStarted","Data":"18eddfeee0a4f9db4f56032d9655818211fc07921662d80f12e6fcb57ec39373"} Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.574416 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"] Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.575619 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.580556 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.585899 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"] Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.720165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkwg\" (UniqueName: \"kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.720266 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.720327 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.821894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mrkwg\" (UniqueName: \"kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.822245 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.822267 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.822695 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.822854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.847920 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkwg\" (UniqueName: \"kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg\") pod \"redhat-operators-nw6lt\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") " pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.888995 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.946273 4809 generic.go:334] "Generic (PLEG): container finished" podID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" containerID="18eddfeee0a4f9db4f56032d9655818211fc07921662d80f12e6fcb57ec39373" exitCode=0 Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.946336 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mm4b" event={"ID":"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2","Type":"ContainerDied","Data":"18eddfeee0a4f9db4f56032d9655818211fc07921662d80f12e6fcb57ec39373"} Feb 26 14:21:35 crc kubenswrapper[4809]: I0226 14:21:35.952995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerStarted","Data":"3a20fc9d66580aec6dc9da167a444997e614434050761430b9ed79cd635d5290"} Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.173882 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2jjnr"] Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.177472 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.182207 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.185149 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2jjnr"] Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.337028 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-catalog-content\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.337358 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvgc\" (UniqueName: \"kubernetes.io/projected/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-kube-api-access-6vvgc\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.337559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-utilities\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.358571 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"] Feb 26 14:21:36 crc kubenswrapper[4809]: W0226 14:21:36.363515 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3611b884_e396_4776_9a3b_7fb279d90bb9.slice/crio-c5150342f61cb5850f645188b7f222e79945e222250a6a47d1cbc5664c6e781e WatchSource:0}: Error finding container c5150342f61cb5850f645188b7f222e79945e222250a6a47d1cbc5664c6e781e: Status 404 
returned error can't find the container with id c5150342f61cb5850f645188b7f222e79945e222250a6a47d1cbc5664c6e781e Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.439180 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-catalog-content\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.439503 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvgc\" (UniqueName: \"kubernetes.io/projected/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-kube-api-access-6vvgc\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.439633 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-utilities\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.439638 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-catalog-content\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.439824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-utilities\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.460122 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvgc\" (UniqueName: \"kubernetes.io/projected/e3b1e666-52f7-42ab-bf72-d47a823ab2fd-kube-api-access-6vvgc\") pod \"community-operators-2jjnr\" (UID: \"e3b1e666-52f7-42ab-bf72-d47a823ab2fd\") " pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.497373 4809 util.go:30] "No sandbox for pod can be found. 
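
The SyncLoop ADD/UPDATE lines for community-operators-2jjnr, like the DELETE/REMOVE lines earlier, come from a single loop multiplexing API-server pod updates with runtime (PLEG) events. A compressed sketch of that dispatch, with channel payloads simplified well past the kubelet's real types:

package main

import "fmt"

type podUpdate struct{ Op, Pod string } // Op: ADD, UPDATE, DELETE, REMOVE

// syncLoop drains both sources until each channel is closed.
func syncLoop(api <-chan podUpdate, pleg <-chan string) {
	for api != nil || pleg != nil {
		select {
		case u, ok := <-api:
			if !ok {
				api = nil
				continue
			}
			fmt.Printf("SyncLoop %s source=%q pods=[%s]\n", u.Op, "api", u.Pod)
		case p, ok := <-pleg:
			if !ok {
				pleg = nil
				continue
			}
			fmt.Printf("SyncLoop (PLEG): event for pod %s\n", p)
		}
	}
}

func main() {
	api := make(chan podUpdate, 2)
	pleg := make(chan string, 1)
	api <- podUpdate{"ADD", "openshift-marketplace/community-operators-2jjnr"}
	api <- podUpdate{"UPDATE", "openshift-marketplace/community-operators-2jjnr"}
	pleg <- "openshift-marketplace/community-operators-2jjnr"
	close(api)
	close(pleg)
	syncLoop(api, pleg)
}
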
Need to start a new one" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.931200 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2jjnr"] Feb 26 14:21:36 crc kubenswrapper[4809]: W0226 14:21:36.939271 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3b1e666_52f7_42ab_bf72_d47a823ab2fd.slice/crio-ead991d9c8a19a8e62a9496e16904bb66e0c0582c7e4837ad31ea6f696e882f5 WatchSource:0}: Error finding container ead991d9c8a19a8e62a9496e16904bb66e0c0582c7e4837ad31ea6f696e882f5: Status 404 returned error can't find the container with id ead991d9c8a19a8e62a9496e16904bb66e0c0582c7e4837ad31ea6f696e882f5 Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.960077 4809 generic.go:334] "Generic (PLEG): container finished" podID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerID="3a20fc9d66580aec6dc9da167a444997e614434050761430b9ed79cd635d5290" exitCode=0 Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.960152 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerDied","Data":"3a20fc9d66580aec6dc9da167a444997e614434050761430b9ed79cd635d5290"} Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.962219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjnr" event={"ID":"e3b1e666-52f7-42ab-bf72-d47a823ab2fd","Type":"ContainerStarted","Data":"ead991d9c8a19a8e62a9496e16904bb66e0c0582c7e4837ad31ea6f696e882f5"} Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.963827 4809 generic.go:334] "Generic (PLEG): container finished" podID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerID="d40e6cfd04deca30253bcb5a46e18d9582d2986c9d855d26a11b27770dcb59da" exitCode=0 Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.963895 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerDied","Data":"d40e6cfd04deca30253bcb5a46e18d9582d2986c9d855d26a11b27770dcb59da"} Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.963923 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerStarted","Data":"c5150342f61cb5850f645188b7f222e79945e222250a6a47d1cbc5664c6e781e"} Feb 26 14:21:36 crc kubenswrapper[4809]: I0226 14:21:36.968878 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mm4b" event={"ID":"45178ad4-29b4-4221-ab5f-8d2c6a9a92d2","Type":"ContainerStarted","Data":"13d3d8963308431a48dc789b0add6e970b1378ab74534b1d8c5d43a2d9616103"} Feb 26 14:21:37 crc kubenswrapper[4809]: I0226 14:21:37.040644 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2mm4b" podStartSLOduration=1.375242187 podStartE2EDuration="4.040620444s" podCreationTimestamp="2026-02-26 14:21:33 +0000 UTC" firstStartedPulling="2026-02-26 14:21:33.927651636 +0000 UTC m=+472.400972159" lastFinishedPulling="2026-02-26 14:21:36.593029893 +0000 UTC m=+475.066350416" observedRunningTime="2026-02-26 14:21:37.036560292 +0000 UTC m=+475.509880825" watchObservedRunningTime="2026-02-26 14:21:37.040620444 +0000 UTC m=+475.513940977" Feb 26 
14:21:37 crc kubenswrapper[4809]: I0226 14:21:37.976884 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerStarted","Data":"1ea9f13f2cdfa3cf44dea40efeb7ee4be4b71ebecc15d355ad6bf613120ac8c9"} Feb 26 14:21:37 crc kubenswrapper[4809]: I0226 14:21:37.978501 4809 generic.go:334] "Generic (PLEG): container finished" podID="e3b1e666-52f7-42ab-bf72-d47a823ab2fd" containerID="3e73313a56dcdd82ca50dd0928d72338fccd139e5671ee1bb54502d4880b8653" exitCode=0 Feb 26 14:21:37 crc kubenswrapper[4809]: I0226 14:21:37.978587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjnr" event={"ID":"e3b1e666-52f7-42ab-bf72-d47a823ab2fd","Type":"ContainerDied","Data":"3e73313a56dcdd82ca50dd0928d72338fccd139e5671ee1bb54502d4880b8653"} Feb 26 14:21:37 crc kubenswrapper[4809]: I0226 14:21:37.986139 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerStarted","Data":"fdeea7b545c15b190a96b05ef8f08b392b672c80169a35e7a730dc79f3c9836e"} Feb 26 14:21:38 crc kubenswrapper[4809]: I0226 14:21:38.003868 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nt9t" podStartSLOduration=2.490110762 podStartE2EDuration="5.003845447s" podCreationTimestamp="2026-02-26 14:21:33 +0000 UTC" firstStartedPulling="2026-02-26 14:21:34.937698517 +0000 UTC m=+473.411019040" lastFinishedPulling="2026-02-26 14:21:37.451433202 +0000 UTC m=+475.924753725" observedRunningTime="2026-02-26 14:21:38.001159446 +0000 UTC m=+476.474479969" watchObservedRunningTime="2026-02-26 14:21:38.003845447 +0000 UTC m=+476.477165960" Feb 26 14:21:38 crc kubenswrapper[4809]: I0226 14:21:38.993153 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjnr" event={"ID":"e3b1e666-52f7-42ab-bf72-d47a823ab2fd","Type":"ContainerStarted","Data":"12fb54279436073917d0392f7356ff81639ab7d87a0d9291eb84805adac49ad3"} Feb 26 14:21:38 crc kubenswrapper[4809]: I0226 14:21:38.995307 4809 generic.go:334] "Generic (PLEG): container finished" podID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerID="fdeea7b545c15b190a96b05ef8f08b392b672c80169a35e7a730dc79f3c9836e" exitCode=0 Feb 26 14:21:38 crc kubenswrapper[4809]: I0226 14:21:38.995395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerDied","Data":"fdeea7b545c15b190a96b05ef8f08b392b672c80169a35e7a730dc79f3c9836e"} Feb 26 14:21:40 crc kubenswrapper[4809]: I0226 14:21:40.001630 4809 generic.go:334] "Generic (PLEG): container finished" podID="e3b1e666-52f7-42ab-bf72-d47a823ab2fd" containerID="12fb54279436073917d0392f7356ff81639ab7d87a0d9291eb84805adac49ad3" exitCode=0 Feb 26 14:21:40 crc kubenswrapper[4809]: I0226 14:21:40.001706 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjnr" event={"ID":"e3b1e666-52f7-42ab-bf72-d47a823ab2fd","Type":"ContainerDied","Data":"12fb54279436073917d0392f7356ff81639ab7d87a0d9291eb84805adac49ad3"} Feb 26 14:21:40 crc kubenswrapper[4809]: I0226 14:21:40.006372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" 
event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerStarted","Data":"d6769f62d559f40396f585b9baf75e217820395d211a18697f6a90f4e7a80a47"} Feb 26 14:21:40 crc kubenswrapper[4809]: I0226 14:21:40.050914 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nw6lt" podStartSLOduration=2.386686213 podStartE2EDuration="5.050897944s" podCreationTimestamp="2026-02-26 14:21:35 +0000 UTC" firstStartedPulling="2026-02-26 14:21:36.965583073 +0000 UTC m=+475.438903596" lastFinishedPulling="2026-02-26 14:21:39.629794804 +0000 UTC m=+478.103115327" observedRunningTime="2026-02-26 14:21:40.049211753 +0000 UTC m=+478.522532286" watchObservedRunningTime="2026-02-26 14:21:40.050897944 +0000 UTC m=+478.524218467" Feb 26 14:21:41 crc kubenswrapper[4809]: I0226 14:21:41.013399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2jjnr" event={"ID":"e3b1e666-52f7-42ab-bf72-d47a823ab2fd","Type":"ContainerStarted","Data":"eed369cb6d21c754d089b7c4cc0719a28d2491bd8c3038e294c2b6c79252878c"} Feb 26 14:21:41 crc kubenswrapper[4809]: I0226 14:21:41.036617 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2jjnr" podStartSLOduration=2.577808855 podStartE2EDuration="5.036596307s" podCreationTimestamp="2026-02-26 14:21:36 +0000 UTC" firstStartedPulling="2026-02-26 14:21:37.979713137 +0000 UTC m=+476.453033660" lastFinishedPulling="2026-02-26 14:21:40.438500589 +0000 UTC m=+478.911821112" observedRunningTime="2026-02-26 14:21:41.029456831 +0000 UTC m=+479.502777374" watchObservedRunningTime="2026-02-26 14:21:41.036596307 +0000 UTC m=+479.509916850" Feb 26 14:21:41 crc kubenswrapper[4809]: I0226 14:21:41.793580 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:21:41 crc kubenswrapper[4809]: I0226 14:21:41.794151 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.294668 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.294901 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" podUID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" containerName="route-controller-manager" containerID="cri-o://5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21" gracePeriod=30 Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.714231 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.839292 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcg7z\" (UniqueName: \"kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z\") pod \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.839409 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config\") pod \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.839463 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert\") pod \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.839488 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca\") pod \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\" (UID: \"c74fbc4a-e7b2-4475-bdea-1302cc0dae91\") " Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.842213 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca" (OuterVolumeSpecName: "client-ca") pod "c74fbc4a-e7b2-4475-bdea-1302cc0dae91" (UID: "c74fbc4a-e7b2-4475-bdea-1302cc0dae91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.842359 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config" (OuterVolumeSpecName: "config") pod "c74fbc4a-e7b2-4475-bdea-1302cc0dae91" (UID: "c74fbc4a-e7b2-4475-bdea-1302cc0dae91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.845551 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z" (OuterVolumeSpecName: "kube-api-access-kcg7z") pod "c74fbc4a-e7b2-4475-bdea-1302cc0dae91" (UID: "c74fbc4a-e7b2-4475-bdea-1302cc0dae91"). InnerVolumeSpecName "kube-api-access-kcg7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.845730 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c74fbc4a-e7b2-4475-bdea-1302cc0dae91" (UID: "c74fbc4a-e7b2-4475-bdea-1302cc0dae91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.940656 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcg7z\" (UniqueName: \"kubernetes.io/projected/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-kube-api-access-kcg7z\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.940695 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.940711 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:42 crc kubenswrapper[4809]: I0226 14:21:42.940728 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c74fbc4a-e7b2-4475-bdea-1302cc0dae91-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.025522 4809 generic.go:334] "Generic (PLEG): container finished" podID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" containerID="5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21" exitCode=0 Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.025574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" event={"ID":"c74fbc4a-e7b2-4475-bdea-1302cc0dae91","Type":"ContainerDied","Data":"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21"} Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.025603 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" event={"ID":"c74fbc4a-e7b2-4475-bdea-1302cc0dae91","Type":"ContainerDied","Data":"31fff4bdac4a2bc0f595bba8153657fbb3232bd9a1cca51eca362d355eaf3288"} Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.025621 4809 scope.go:117] "RemoveContainer" containerID="5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.025581 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.041918 4809 scope.go:117] "RemoveContainer" containerID="5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21" Feb 26 14:21:43 crc kubenswrapper[4809]: E0226 14:21:43.042510 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21\": container with ID starting with 5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21 not found: ID does not exist" containerID="5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.042543 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21"} err="failed to get container status \"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21\": rpc error: code = NotFound desc = could not find container \"5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21\": container with ID starting with 5f852fd030ee37876e67f763e25680798e6d13b8a26a407ec79358db2ab59c21 not found: ID does not exist" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.048556 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.053837 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65d69d64d8-d6r8g"] Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.500885 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.500941 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.548635 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.701989 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw"] Feb 26 14:21:43 crc kubenswrapper[4809]: E0226 14:21:43.702303 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" containerName="route-controller-manager" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.702321 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" containerName="route-controller-manager" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.702452 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" containerName="route-controller-manager" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.702917 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.707963 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.707967 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.708052 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.708095 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.708129 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.708248 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.715149 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw"] Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.874659 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-client-ca\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.874748 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-config\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.874806 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xclzp\" (UniqueName: \"kubernetes.io/projected/81541539-a8ed-415e-aae6-3bb9cb639c08-kube-api-access-xclzp\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.874836 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81541539-a8ed-415e-aae6-3bb9cb639c08-serving-cert\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.976451 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-client-ca\") pod 
\"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.976518 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-config\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.976565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xclzp\" (UniqueName: \"kubernetes.io/projected/81541539-a8ed-415e-aae6-3bb9cb639c08-kube-api-access-xclzp\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.976593 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81541539-a8ed-415e-aae6-3bb9cb639c08-serving-cert\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.977643 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-client-ca\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.978991 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81541539-a8ed-415e-aae6-3bb9cb639c08-config\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.982670 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81541539-a8ed-415e-aae6-3bb9cb639c08-serving-cert\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:43 crc kubenswrapper[4809]: I0226 14:21:43.998415 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xclzp\" (UniqueName: \"kubernetes.io/projected/81541539-a8ed-415e-aae6-3bb9cb639c08-kube-api-access-xclzp\") pod \"route-controller-manager-5999566584-zlmhw\" (UID: \"81541539-a8ed-415e-aae6-3bb9cb639c08\") " pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.017703 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.076432 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2mm4b" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.112329 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.112580 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.158864 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.262070 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74fbc4a-e7b2-4475-bdea-1302cc0dae91" path="/var/lib/kubelet/pods/c74fbc4a-e7b2-4475-bdea-1302cc0dae91/volumes" Feb 26 14:21:44 crc kubenswrapper[4809]: I0226 14:21:44.430780 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw"] Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.040968 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" event={"ID":"81541539-a8ed-415e-aae6-3bb9cb639c08","Type":"ContainerStarted","Data":"134fc439d4b4b93a4e9b1143940660618bc5ceae7244553146c4180a5762e16d"} Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.041458 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" event={"ID":"81541539-a8ed-415e-aae6-3bb9cb639c08","Type":"ContainerStarted","Data":"48802cff14a315f15f03321e54c199c2df11b4246d39aad226a5a423b53e5061"} Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.069668 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" podStartSLOduration=3.069651442 podStartE2EDuration="3.069651442s" podCreationTimestamp="2026-02-26 14:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:21:45.068203238 +0000 UTC m=+483.541523761" watchObservedRunningTime="2026-02-26 14:21:45.069651442 +0000 UTC m=+483.542971965" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.073520 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" podUID="911a7065-8744-4237-a986-118263d49bb0" containerName="registry" containerID="cri-o://17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be" gracePeriod=30 Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.094478 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.116587 4809 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-hr5qh container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused" start-of-body= Feb 26 
14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.116643 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" podUID="911a7065-8744-4237-a986-118263d49bb0" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.426430 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.519827 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.519928 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.519960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.521118 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfh6q\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.521180 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.521209 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.521315 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.521340 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca\") pod \"911a7065-8744-4237-a986-118263d49bb0\" (UID: \"911a7065-8744-4237-a986-118263d49bb0\") " Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.522392 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.522867 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.526574 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.527833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q" (OuterVolumeSpecName: "kube-api-access-cfh6q") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "kube-api-access-cfh6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.532585 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.533648 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.541121 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.544648 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "911a7065-8744-4237-a986-118263d49bb0" (UID: "911a7065-8744-4237-a986-118263d49bb0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622896 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622935 4809 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622951 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/911a7065-8744-4237-a986-118263d49bb0-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622962 4809 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/911a7065-8744-4237-a986-118263d49bb0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622974 4809 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/911a7065-8744-4237-a986-118263d49bb0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622984 4809 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.622996 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfh6q\" (UniqueName: \"kubernetes.io/projected/911a7065-8744-4237-a986-118263d49bb0-kube-api-access-cfh6q\") on node \"crc\" DevicePath \"\"" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.889722 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.889853 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:45 crc kubenswrapper[4809]: I0226 14:21:45.939765 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.047921 4809 generic.go:334] "Generic (PLEG): container finished" podID="911a7065-8744-4237-a986-118263d49bb0" containerID="17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be" exitCode=0 Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.048043 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" event={"ID":"911a7065-8744-4237-a986-118263d49bb0","Type":"ContainerDied","Data":"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be"} Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.048085 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" event={"ID":"911a7065-8744-4237-a986-118263d49bb0","Type":"ContainerDied","Data":"721b4888ab01ec44e19c5a26969536a6b8739ac4e068c10497822a7056c970bf"} Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.048106 4809 scope.go:117] "RemoveContainer" 
containerID="17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.048495 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.048888 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hr5qh" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.055369 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.069817 4809 scope.go:117] "RemoveContainer" containerID="17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be" Feb 26 14:21:46 crc kubenswrapper[4809]: E0226 14:21:46.070774 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be\": container with ID starting with 17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be not found: ID does not exist" containerID="17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.070824 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be"} err="failed to get container status \"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be\": rpc error: code = NotFound desc = could not find container \"17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be\": container with ID starting with 17184c45b57ff961c70e6a568a872fc56e1f2cab739f623eb193e8b3a282d6be not found: ID does not exist" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.099163 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.099752 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nw6lt" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.106299 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hr5qh"] Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.264210 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911a7065-8744-4237-a986-118263d49bb0" path="/var/lib/kubelet/pods/911a7065-8744-4237-a986-118263d49bb0/volumes" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.498470 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.498982 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:46 crc kubenswrapper[4809]: I0226 14:21:46.550829 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:21:47 crc kubenswrapper[4809]: I0226 14:21:47.104810 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2jjnr" Feb 26 14:22:00 crc 
kubenswrapper[4809]: I0226 14:22:00.138303 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535262-x6crs"] Feb 26 14:22:00 crc kubenswrapper[4809]: E0226 14:22:00.139145 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911a7065-8744-4237-a986-118263d49bb0" containerName="registry" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.139160 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="911a7065-8744-4237-a986-118263d49bb0" containerName="registry" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.139278 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="911a7065-8744-4237-a986-118263d49bb0" containerName="registry" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.139736 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-x6crs" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.142286 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.142703 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.145678 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.158236 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-x6crs"] Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.205772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtb2x\" (UniqueName: \"kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x\") pod \"auto-csr-approver-29535262-x6crs\" (UID: \"c4664130-dc57-4baa-a6cc-17f864bfbe02\") " pod="openshift-infra/auto-csr-approver-29535262-x6crs" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.306869 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtb2x\" (UniqueName: \"kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x\") pod \"auto-csr-approver-29535262-x6crs\" (UID: \"c4664130-dc57-4baa-a6cc-17f864bfbe02\") " pod="openshift-infra/auto-csr-approver-29535262-x6crs" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.324720 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtb2x\" (UniqueName: \"kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x\") pod \"auto-csr-approver-29535262-x6crs\" (UID: \"c4664130-dc57-4baa-a6cc-17f864bfbe02\") " pod="openshift-infra/auto-csr-approver-29535262-x6crs" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.480504 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-x6crs" Feb 26 14:22:00 crc kubenswrapper[4809]: I0226 14:22:00.892533 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-x6crs"] Feb 26 14:22:02 crc kubenswrapper[4809]: I0226 14:22:01.131244 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-x6crs" event={"ID":"c4664130-dc57-4baa-a6cc-17f864bfbe02","Type":"ContainerStarted","Data":"2e016cd99d0006c2de88d571c351bf8775d9cf9609688eb9739c09af0f276233"} Feb 26 14:22:02 crc kubenswrapper[4809]: I0226 14:22:02.289320 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:22:02 crc kubenswrapper[4809]: I0226 14:22:02.289554 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" podUID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" containerName="controller-manager" containerID="cri-o://442c89a671e4d3e150ad5b16b50a89e76f3d5346300a735347af8997f031d8ff" gracePeriod=30 Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.141981 4809 generic.go:334] "Generic (PLEG): container finished" podID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" containerID="442c89a671e4d3e150ad5b16b50a89e76f3d5346300a735347af8997f031d8ff" exitCode=0 Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.142149 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" event={"ID":"6092a0cc-bdbc-4070-95d5-b23bf3035e7c","Type":"ContainerDied","Data":"442c89a671e4d3e150ad5b16b50a89e76f3d5346300a735347af8997f031d8ff"} Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.455516 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.483324 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p"] Feb 26 14:22:03 crc kubenswrapper[4809]: E0226 14:22:03.483524 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" containerName="controller-manager" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.483535 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" containerName="controller-manager" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.483694 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" containerName="controller-manager" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.485654 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.500164 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p"] Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.652399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config\") pod \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.653283 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config" (OuterVolumeSpecName: "config") pod "6092a0cc-bdbc-4070-95d5-b23bf3035e7c" (UID: "6092a0cc-bdbc-4070-95d5-b23bf3035e7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.653966 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles\") pod \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654090 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbkb\" (UniqueName: \"kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb\") pod \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654137 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca\") pod \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654188 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert\") pod \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\" (UID: \"6092a0cc-bdbc-4070-95d5-b23bf3035e7c\") " Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654333 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-proxy-ca-bundles\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-config\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654419 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-client-ca\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654557 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh487\" (UniqueName: \"kubernetes.io/projected/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-kube-api-access-vh487\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654660 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-serving-cert\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654753 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.654753 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "6092a0cc-bdbc-4070-95d5-b23bf3035e7c" (UID: "6092a0cc-bdbc-4070-95d5-b23bf3035e7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.655035 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6092a0cc-bdbc-4070-95d5-b23bf3035e7c" (UID: "6092a0cc-bdbc-4070-95d5-b23bf3035e7c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.660611 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6092a0cc-bdbc-4070-95d5-b23bf3035e7c" (UID: "6092a0cc-bdbc-4070-95d5-b23bf3035e7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.660745 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb" (OuterVolumeSpecName: "kube-api-access-5zbkb") pod "6092a0cc-bdbc-4070-95d5-b23bf3035e7c" (UID: "6092a0cc-bdbc-4070-95d5-b23bf3035e7c"). InnerVolumeSpecName "kube-api-access-5zbkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.756226 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-proxy-ca-bundles\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.757621 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-config\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.758839 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-client-ca\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.757547 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-proxy-ca-bundles\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.758764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-config\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.758977 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh487\" (UniqueName: \"kubernetes.io/projected/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-kube-api-access-vh487\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.759651 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-client-ca\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.760475 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-serving-cert\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.762102 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.762171 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.762188 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbkb\" (UniqueName: \"kubernetes.io/projected/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-kube-api-access-5zbkb\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.762200 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6092a0cc-bdbc-4070-95d5-b23bf3035e7c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.765417 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-serving-cert\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.780902 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh487\" (UniqueName: \"kubernetes.io/projected/54f5bb56-f353-4d8d-8a61-f0925dc4c25d-kube-api-access-vh487\") pod \"controller-manager-5c78f4f7b8-2rr5p\" (UID: \"54f5bb56-f353-4d8d-8a61-f0925dc4c25d\") " pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:03 crc kubenswrapper[4809]: I0226 14:22:03.810933 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.023860 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p"] Feb 26 14:22:04 crc kubenswrapper[4809]: W0226 14:22:04.037573 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f5bb56_f353_4d8d_8a61_f0925dc4c25d.slice/crio-9960192c2312fc75cb6f5810215a70a88432fc7be005abb966f4a1b42b236645 WatchSource:0}: Error finding container 9960192c2312fc75cb6f5810215a70a88432fc7be005abb966f4a1b42b236645: Status 404 returned error can't find the container with id 9960192c2312fc75cb6f5810215a70a88432fc7be005abb966f4a1b42b236645 Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.151639 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4664130-dc57-4baa-a6cc-17f864bfbe02" containerID="587df762653423efcc2bdf6af91f043e774421cc6bf8d69a9c754eb467608b6f" exitCode=0 Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.152103 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-x6crs" event={"ID":"c4664130-dc57-4baa-a6cc-17f864bfbe02","Type":"ContainerDied","Data":"587df762653423efcc2bdf6af91f043e774421cc6bf8d69a9c754eb467608b6f"} Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.159328 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.159328 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d889f6fd-pjv8r" event={"ID":"6092a0cc-bdbc-4070-95d5-b23bf3035e7c","Type":"ContainerDied","Data":"5ca2b0bd10023003aadb6e8a5d3ad6b4ec8197d8b2f8070c013eddb79a804223"} Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.159389 4809 scope.go:117] "RemoveContainer" containerID="442c89a671e4d3e150ad5b16b50a89e76f3d5346300a735347af8997f031d8ff" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.161305 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" event={"ID":"54f5bb56-f353-4d8d-8a61-f0925dc4c25d","Type":"ContainerStarted","Data":"45de153f9d2f031b3e45b449e41583b1c2496de5097566426bb49928cfaa40e1"} Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.161339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" event={"ID":"54f5bb56-f353-4d8d-8a61-f0925dc4c25d","Type":"ContainerStarted","Data":"9960192c2312fc75cb6f5810215a70a88432fc7be005abb966f4a1b42b236645"} Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.161556 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.163869 4809 patch_prober.go:28] interesting pod/controller-manager-5c78f4f7b8-2rr5p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.78:8443/healthz\": dial tcp 10.217.0.78:8443: connect: connection refused" start-of-body= Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.163911 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podUID="54f5bb56-f353-4d8d-8a61-f0925dc4c25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.78:8443/healthz\": dial tcp 10.217.0.78:8443: connect: connection refused" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.196613 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podStartSLOduration=2.19657609 podStartE2EDuration="2.19657609s" podCreationTimestamp="2026-02-26 14:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:22:04.187162255 +0000 UTC m=+502.660482778" watchObservedRunningTime="2026-02-26 14:22:04.19657609 +0000 UTC m=+502.669896613" Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.209585 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.213261 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77d889f6fd-pjv8r"] Feb 26 14:22:04 crc kubenswrapper[4809]: I0226 14:22:04.264872 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6092a0cc-bdbc-4070-95d5-b23bf3035e7c" path="/var/lib/kubelet/pods/6092a0cc-bdbc-4070-95d5-b23bf3035e7c/volumes" Feb 26 14:22:05 crc kubenswrapper[4809]: I0226 14:22:05.178310 4809 
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.195695 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535262-x6crs" event={"ID":"c4664130-dc57-4baa-a6cc-17f864bfbe02","Type":"ContainerDied","Data":"2e016cd99d0006c2de88d571c351bf8775d9cf9609688eb9739c09af0f276233"}
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.195791 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e016cd99d0006c2de88d571c351bf8775d9cf9609688eb9739c09af0f276233"
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.225472 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-x6crs"
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.294399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtb2x\" (UniqueName: \"kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x\") pod \"c4664130-dc57-4baa-a6cc-17f864bfbe02\" (UID: \"c4664130-dc57-4baa-a6cc-17f864bfbe02\") "
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.301465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x" (OuterVolumeSpecName: "kube-api-access-mtb2x") pod "c4664130-dc57-4baa-a6cc-17f864bfbe02" (UID: "c4664130-dc57-4baa-a6cc-17f864bfbe02"). InnerVolumeSpecName "kube-api-access-mtb2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:22:06 crc kubenswrapper[4809]: I0226 14:22:06.395551 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtb2x\" (UniqueName: \"kubernetes.io/projected/c4664130-dc57-4baa-a6cc-17f864bfbe02-kube-api-access-mtb2x\") on node \"crc\" DevicePath \"\""
Feb 26 14:22:07 crc kubenswrapper[4809]: I0226 14:22:07.201695 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535262-x6crs"
Feb 26 14:22:07 crc kubenswrapper[4809]: I0226 14:22:07.276489 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-qfv5b"]
Feb 26 14:22:07 crc kubenswrapper[4809]: I0226 14:22:07.279650 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535256-qfv5b"]
Feb 26 14:22:08 crc kubenswrapper[4809]: I0226 14:22:08.263170 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4611e2a1-2842-4901-b49b-126b928b38f1" path="/var/lib/kubelet/pods/4611e2a1-2842-4901-b49b-126b928b38f1/volumes"
Feb 26 14:22:11 crc kubenswrapper[4809]: I0226 14:22:11.793793 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:22:11 crc kubenswrapper[4809]: I0226 14:22:11.794206 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:22:11 crc kubenswrapper[4809]: I0226 14:22:11.794265 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh"
Feb 26 14:22:11 crc kubenswrapper[4809]: I0226 14:22:11.794941 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 14:22:11 crc kubenswrapper[4809]: I0226 14:22:11.795033 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed" gracePeriod=600
Feb 26 14:22:12 crc kubenswrapper[4809]: I0226 14:22:12.237291 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed" exitCode=0
Feb 26 14:22:12 crc kubenswrapper[4809]: I0226 14:22:12.237342 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed"}
Feb 26 14:22:12 crc kubenswrapper[4809]: I0226 14:22:12.237383 4809 scope.go:117] "RemoveContainer" containerID="b336666e4275b5835bbdf61154e678decd2cd8936da74581885bf399b0767e02"
Feb 26 14:22:13 crc kubenswrapper[4809]: I0226 14:22:13.244411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00"}
event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00"} Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.303443 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp"] Feb 26 14:22:24 crc kubenswrapper[4809]: E0226 14:22:24.304586 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4664130-dc57-4baa-a6cc-17f864bfbe02" containerName="oc" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.304608 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4664130-dc57-4baa-a6cc-17f864bfbe02" containerName="oc" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.304775 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4664130-dc57-4baa-a6cc-17f864bfbe02" containerName="oc" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.305573 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.307614 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.307866 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.308155 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.308336 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.308806 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.322777 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp"] Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.443721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/0388aaa1-7984-471c-8f59-cddd09611146-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.444226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/0388aaa1-7984-471c-8f59-cddd09611146-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.444394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cncn\" (UniqueName: \"kubernetes.io/projected/0388aaa1-7984-471c-8f59-cddd09611146-kube-api-access-9cncn\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.545075 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/0388aaa1-7984-471c-8f59-cddd09611146-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.545140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cncn\" (UniqueName: \"kubernetes.io/projected/0388aaa1-7984-471c-8f59-cddd09611146-kube-api-access-9cncn\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.545162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/0388aaa1-7984-471c-8f59-cddd09611146-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.546305 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/0388aaa1-7984-471c-8f59-cddd09611146-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.554075 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/0388aaa1-7984-471c-8f59-cddd09611146-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.575575 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cncn\" (UniqueName: \"kubernetes.io/projected/0388aaa1-7984-471c-8f59-cddd09611146-kube-api-access-9cncn\") pod \"cluster-monitoring-operator-6d5b84845-mspwp\" (UID: \"0388aaa1-7984-471c-8f59-cddd09611146\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:24 crc kubenswrapper[4809]: I0226 14:22:24.626282 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" Feb 26 14:22:25 crc kubenswrapper[4809]: I0226 14:22:25.076723 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp"] Feb 26 14:22:25 crc kubenswrapper[4809]: I0226 14:22:25.323584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" event={"ID":"0388aaa1-7984-471c-8f59-cddd09611146","Type":"ContainerStarted","Data":"3c66927a56bc47e18943031f022c16202b6400cd5f8b2f729a40d0a989e90128"} Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.337361 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" event={"ID":"0388aaa1-7984-471c-8f59-cddd09611146","Type":"ContainerStarted","Data":"e1b3deb0e42bf69ad6f4e85c2e946f19e690c8038127900f14933f049b20f0cb"} Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.384885 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mspwp" podStartSLOduration=1.6483043579999999 podStartE2EDuration="3.384855273s" podCreationTimestamp="2026-02-26 14:22:24 +0000 UTC" firstStartedPulling="2026-02-26 14:22:25.091962099 +0000 UTC m=+523.565282622" lastFinishedPulling="2026-02-26 14:22:26.828513014 +0000 UTC m=+525.301833537" observedRunningTime="2026-02-26 14:22:27.354645505 +0000 UTC m=+525.827966038" watchObservedRunningTime="2026-02-26 14:22:27.384855273 +0000 UTC m=+525.858175796" Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.387504 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4"] Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.388264 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:27 crc kubenswrapper[4809]: W0226 14:22:27.389738 4809 reflector.go:561] object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-ffdwq": failed to list *v1.Secret: secrets "prometheus-operator-admission-webhook-dockercfg-ffdwq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Feb 26 14:22:27 crc kubenswrapper[4809]: E0226 14:22:27.389773 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-ffdwq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"prometheus-operator-admission-webhook-dockercfg-ffdwq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 14:22:27 crc kubenswrapper[4809]: W0226 14:22:27.391128 4809 reflector.go:561] object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls": failed to list *v1.Secret: secrets "prometheus-operator-admission-webhook-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Feb 26 14:22:27 crc kubenswrapper[4809]: E0226 14:22:27.391180 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"prometheus-operator-admission-webhook-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.404510 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4"] Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.410455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hdkn4\" (UID: \"9a937049-c4e1-499a-b3eb-6622e14cf7f5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:27 crc kubenswrapper[4809]: I0226 14:22:27.511201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hdkn4\" (UID: \"9a937049-c4e1-499a-b3eb-6622e14cf7f5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:28 crc kubenswrapper[4809]: E0226 14:22:28.512219 4809 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 14:22:28 crc kubenswrapper[4809]: E0226 14:22:28.512377 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates podName:9a937049-c4e1-499a-b3eb-6622e14cf7f5 nodeName:}" failed. No retries permitted until 2026-02-26 14:22:29.012345137 +0000 UTC m=+527.485665660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-hdkn4" (UID: "9a937049-c4e1-499a-b3eb-6622e14cf7f5") : failed to sync secret cache: timed out waiting for the condition Feb 26 14:22:28 crc kubenswrapper[4809]: I0226 14:22:28.514203 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-ffdwq" Feb 26 14:22:28 crc kubenswrapper[4809]: I0226 14:22:28.578609 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 26 14:22:29 crc kubenswrapper[4809]: I0226 14:22:29.032580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hdkn4\" (UID: \"9a937049-c4e1-499a-b3eb-6622e14cf7f5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:29 crc kubenswrapper[4809]: I0226 14:22:29.045810 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9a937049-c4e1-499a-b3eb-6622e14cf7f5-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-hdkn4\" (UID: \"9a937049-c4e1-499a-b3eb-6622e14cf7f5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:29 crc kubenswrapper[4809]: I0226 14:22:29.204938 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:29 crc kubenswrapper[4809]: I0226 14:22:29.617333 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4"] Feb 26 14:22:30 crc kubenswrapper[4809]: I0226 14:22:30.357738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" event={"ID":"9a937049-c4e1-499a-b3eb-6622e14cf7f5","Type":"ContainerStarted","Data":"c60765f59fef7f19549796759fb7dd3e22ce0d4b018d4e0e8717f9cbae985e64"} Feb 26 14:22:31 crc kubenswrapper[4809]: I0226 14:22:31.369570 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" event={"ID":"9a937049-c4e1-499a-b3eb-6622e14cf7f5","Type":"ContainerStarted","Data":"13b5bdbf02968f1e31fdec285a37075b5f0919f38ee608801c2cb4be8b4cdb90"} Feb 26 14:22:31 crc kubenswrapper[4809]: I0226 14:22:31.369932 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:31 crc kubenswrapper[4809]: I0226 14:22:31.377089 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 14:22:31 crc kubenswrapper[4809]: I0226 14:22:31.387490 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podStartSLOduration=3.228177818 podStartE2EDuration="4.387469495s" podCreationTimestamp="2026-02-26 14:22:27 +0000 UTC" firstStartedPulling="2026-02-26 14:22:29.62800545 +0000 UTC m=+528.101326013" lastFinishedPulling="2026-02-26 14:22:30.787297167 +0000 UTC m=+529.260617690" observedRunningTime="2026-02-26 14:22:31.382276076 +0000 UTC m=+529.855596609" watchObservedRunningTime="2026-02-26 14:22:31.387469495 +0000 UTC m=+529.860790018" Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.427218 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-p8vdx"] Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.428308 4809 util.go:30] "No sandbox for pod can be found. 
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.430749 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.431065 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.436900 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pmvtb"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.437590 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.440287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-p8vdx"]
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.471156 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef6896f1-6f41-4217-9cb3-5f748e620613-metrics-client-ca\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.471230 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.471293 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.471347 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69tg\" (UniqueName: \"kubernetes.io/projected/ef6896f1-6f41-4217-9cb3-5f748e620613-kube-api-access-r69tg\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.572366 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef6896f1-6f41-4217-9cb3-5f748e620613-metrics-client-ca\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.572777 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.572974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.573276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r69tg\" (UniqueName: \"kubernetes.io/projected/ef6896f1-6f41-4217-9cb3-5f748e620613-kube-api-access-r69tg\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.573914 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef6896f1-6f41-4217-9cb3-5f748e620613-metrics-client-ca\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.581372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.589215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef6896f1-6f41-4217-9cb3-5f748e620613-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.593054 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r69tg\" (UniqueName: \"kubernetes.io/projected/ef6896f1-6f41-4217-9cb3-5f748e620613-kube-api-access-r69tg\") pod \"prometheus-operator-db54df47d-p8vdx\" (UID: \"ef6896f1-6f41-4217-9cb3-5f748e620613\") " pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:32 crc kubenswrapper[4809]: I0226 14:22:32.743164 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx"
Feb 26 14:22:33 crc kubenswrapper[4809]: I0226 14:22:33.188606 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-p8vdx"]
Feb 26 14:22:33 crc kubenswrapper[4809]: I0226 14:22:33.380600 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx" event={"ID":"ef6896f1-6f41-4217-9cb3-5f748e620613","Type":"ContainerStarted","Data":"ad3ccceb2aa55b6ead9fd8290dd4372862371bc3b0fbee1482ee43cdc124a460"}
Feb 26 14:22:35 crc kubenswrapper[4809]: I0226 14:22:35.393810 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx" event={"ID":"ef6896f1-6f41-4217-9cb3-5f748e620613","Type":"ContainerStarted","Data":"ea299f7cb097062e11f3ac0ed64c3e8d6f28e04e8207f91638bde680f3b4f9d2"}
Feb 26 14:22:35 crc kubenswrapper[4809]: I0226 14:22:35.394250 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx" event={"ID":"ef6896f1-6f41-4217-9cb3-5f748e620613","Type":"ContainerStarted","Data":"6b30444c972fe83370182bf4f359c7a4b2016c8d5266f80984f0c1a08215bbd5"}
Feb 26 14:22:35 crc kubenswrapper[4809]: I0226 14:22:35.418618 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-p8vdx" podStartSLOduration=1.9859022670000002 podStartE2EDuration="3.418592166s" podCreationTimestamp="2026-02-26 14:22:32 +0000 UTC" firstStartedPulling="2026-02-26 14:22:33.202348242 +0000 UTC m=+531.675668775" lastFinishedPulling="2026-02-26 14:22:34.635038151 +0000 UTC m=+533.108358674" observedRunningTime="2026-02-26 14:22:35.414644632 +0000 UTC m=+533.887965195" watchObservedRunningTime="2026-02-26 14:22:35.418592166 +0000 UTC m=+533.891912709"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.770700 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"]
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.771998 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.776307 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"]
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.777103 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-84f2j"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.777279 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.779792 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.779990 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.780299 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.780402 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.785090 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"]
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.785213 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.785424 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-j2mp8"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.798223 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-k8j7c"]
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.799175 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.815191 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.815248 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.815366 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-dr8qc"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.838417 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"]
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949932 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949951 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-wtmp\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-tls\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.949999 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-textfile\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950036 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svcp\" (UniqueName: \"kubernetes.io/projected/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-kube-api-access-2svcp\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/98ab39a3-50c9-416d-a384-329b460f8e80-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950070 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950086 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprjg\" (UniqueName: \"kubernetes.io/projected/98ab39a3-50c9-416d-a384-329b460f8e80-kube-api-access-lprjg\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950105 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950126 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950142 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-sys\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950199 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-root\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950214 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbtm\" (UniqueName: \"kubernetes.io/projected/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-kube-api-access-7pbtm\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:37 crc kubenswrapper[4809]: I0226 14:22:37.950232 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-metrics-client-ca\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051577 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lprjg\" (UniqueName: \"kubernetes.io/projected/98ab39a3-50c9-416d-a384-329b460f8e80-kube-api-access-lprjg\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
\"kubernetes.io/projected/98ab39a3-50c9-416d-a384-329b460f8e80-kube-api-access-lprjg\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051646 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051670 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051688 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051705 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-sys\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051737 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051761 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-root\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051777 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pbtm\" (UniqueName: \"kubernetes.io/projected/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-kube-api-access-7pbtm\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " 
pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-metrics-client-ca\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051835 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051856 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051873 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-wtmp\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051897 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-tls\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051918 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-textfile\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svcp\" (UniqueName: \"kubernetes.io/projected/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-kube-api-access-2svcp\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.051959 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/98ab39a3-50c9-416d-a384-329b460f8e80-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.052424 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/98ab39a3-50c9-416d-a384-329b460f8e80-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.053563 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.054176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-root\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.054179 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-sys\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c" Feb 26 14:22:38 crc kubenswrapper[4809]: E0226 14:22:38.054407 4809 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 26 14:22:38 crc kubenswrapper[4809]: E0226 14:22:38.054501 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-tls podName:9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8 nodeName:}" failed. No retries permitted until 2026-02-26 14:22:38.554479933 +0000 UTC m=+537.027800456 (durationBeforeRetry 500ms). 
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.054681 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-textfile\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.054863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-wtmp\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.054953 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.055097 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-metrics-client-ca\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.055450 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.059719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.063553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.065513 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.065707 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.071501 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/98ab39a3-50c9-416d-a384-329b460f8e80-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.076565 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svcp\" (UniqueName: \"kubernetes.io/projected/8f9ff77d-2aae-4c19-8e9a-ffff460c41bf-kube-api-access-2svcp\") pod \"openshift-state-metrics-566fddb674-zr78f\" (UID: \"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.078640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pbtm\" (UniqueName: \"kubernetes.io/projected/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-kube-api-access-7pbtm\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.078774 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprjg\" (UniqueName: \"kubernetes.io/projected/98ab39a3-50c9-416d-a384-329b460f8e80-kube-api-access-lprjg\") pod \"kube-state-metrics-777cb5bd5d-jsk9n\" (UID: \"98ab39a3-50c9-416d-a384-329b460f8e80\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.092720 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.100355 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.503322 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-zr78f"]
Feb 26 14:22:38 crc kubenswrapper[4809]: W0226 14:22:38.511385 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f9ff77d_2aae_4c19_8e9a_ffff460c41bf.slice/crio-f1548896396d68cc02dea51c182ce5e456648b6708bd73193763518ab37bfa0b WatchSource:0}: Error finding container f1548896396d68cc02dea51c182ce5e456648b6708bd73193763518ab37bfa0b: Status 404 returned error can't find the container with id f1548896396d68cc02dea51c182ce5e456648b6708bd73193763518ab37bfa0b
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.559106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-tls\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.563482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8-node-exporter-tls\") pod \"node-exporter-k8j7c\" (UID: \"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8\") " pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.569747 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n"]
Feb 26 14:22:38 crc kubenswrapper[4809]: W0226 14:22:38.577430 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ab39a3_50c9_416d_a384_329b460f8e80.slice/crio-cd2fac6e8f9eb324b625e4f396bd92db7eb3f4c020e14b410ea097514e4ccd51 WatchSource:0}: Error finding container cd2fac6e8f9eb324b625e4f396bd92db7eb3f4c020e14b410ea097514e4ccd51: Status 404 returned error can't find the container with id cd2fac6e8f9eb324b625e4f396bd92db7eb3f4c020e14b410ea097514e4ccd51
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.580302 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.722347 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-k8j7c"
Feb 26 14:22:38 crc kubenswrapper[4809]: W0226 14:22:38.744288 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bf5d71f_ffa5_437b_8cd2_fd9b8c15a8c8.slice/crio-1cd394b29b8f060b4857e076f12ef8ada9b2e672e15d2a52b6e2a69ab251cf3a WatchSource:0}: Error finding container 1cd394b29b8f060b4857e076f12ef8ada9b2e672e15d2a52b6e2a69ab251cf3a: Status 404 returned error can't find the container with id 1cd394b29b8f060b4857e076f12ef8ada9b2e672e15d2a52b6e2a69ab251cf3a
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.864940 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.866958 4809 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.868785 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.869038 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.869161 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.869262 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.872298 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4nxg2" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.872467 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.872603 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.873144 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.873958 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.888159 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966428 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966477 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966505 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-volume\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966527 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpfz\" (UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-kube-api-access-8xpfz\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.966962 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.967030 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-tls-assets\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.967062 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-web-config\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.967084 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-out\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.967100 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:38 crc kubenswrapper[4809]: I0226 14:22:38.967121 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.068586 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-tls-assets\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.068636 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-web-config\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.068664 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-out\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.068688 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.068720 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069492 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069545 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069581 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-volume\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069602 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069652 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpfz\" (UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-kube-api-access-8xpfz\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069722 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.069778 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.073910 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-tls-assets\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.073934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.074359 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-web-config\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.074427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.074707 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.074872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.075783 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.077074 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-volume\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.078452 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-config-out\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.094042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpfz\" (UniqueName: \"kubernetes.io/projected/29da76bc-c43e-4b4b-b3df-05ca2015e1e9-kube-api-access-8xpfz\") pod \"alertmanager-main-0\" (UID: \"29da76bc-c43e-4b4b-b3df-05ca2015e1e9\") " pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.181354 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.424566 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" event={"ID":"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf","Type":"ContainerStarted","Data":"7a7b4f2dbf924a2920901823f2e69ed6dfba086682a0b2dfd02f95fad95f5bca"} Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.424916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" event={"ID":"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf","Type":"ContainerStarted","Data":"1feb295d1ad4a4b26e6cad7d7c5409acd8a4cbc3b41dc2e38ec78a9f59c34631"} Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.424931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" event={"ID":"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf","Type":"ContainerStarted","Data":"f1548896396d68cc02dea51c182ce5e456648b6708bd73193763518ab37bfa0b"} Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.426224 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k8j7c" event={"ID":"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8","Type":"ContainerStarted","Data":"1cd394b29b8f060b4857e076f12ef8ada9b2e672e15d2a52b6e2a69ab251cf3a"} Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.427352 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" event={"ID":"98ab39a3-50c9-416d-a384-329b460f8e80","Type":"ContainerStarted","Data":"cd2fac6e8f9eb324b625e4f396bd92db7eb3f4c020e14b410ea097514e4ccd51"} Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.608207 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 26 14:22:39 crc kubenswrapper[4809]: W0226 14:22:39.826170 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29da76bc_c43e_4b4b_b3df_05ca2015e1e9.slice/crio-6f0ade02d6c15b5ee0a5f9dd0139867549f007ae6aa984abe36e03f82435d58f WatchSource:0}: Error finding container 6f0ade02d6c15b5ee0a5f9dd0139867549f007ae6aa984abe36e03f82435d58f: Status 404 returned error can't find the container with id 6f0ade02d6c15b5ee0a5f9dd0139867549f007ae6aa984abe36e03f82435d58f Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.856223 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-598477c4d-v2nsv"] Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.858000 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866494 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8m0ns8aimpt6f" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866542 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866665 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866687 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866694 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.866874 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4rk8p" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.867490 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.871271 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-598477c4d-v2nsv"] Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.987954 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988080 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdzv\" (UniqueName: \"kubernetes.io/projected/b837cadb-b512-4a4a-ae50-0b8729bd351a-kube-api-access-ssdzv\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988845 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-grpc-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988919 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b837cadb-b512-4a4a-ae50-0b8729bd351a-metrics-client-ca\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:39 crc kubenswrapper[4809]: I0226 14:22:39.988968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090605 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090641 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090689 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdzv\" (UniqueName: \"kubernetes.io/projected/b837cadb-b512-4a4a-ae50-0b8729bd351a-kube-api-access-ssdzv\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090740 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090761 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-grpc-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090791 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b837cadb-b512-4a4a-ae50-0b8729bd351a-metrics-client-ca\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.090814 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.092115 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b837cadb-b512-4a4a-ae50-0b8729bd351a-metrics-client-ca\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.095835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.097718 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.098991 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.100265 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.103398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-grpc-tls\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.104719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b837cadb-b512-4a4a-ae50-0b8729bd351a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.109370 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdzv\" (UniqueName: \"kubernetes.io/projected/b837cadb-b512-4a4a-ae50-0b8729bd351a-kube-api-access-ssdzv\") pod \"thanos-querier-598477c4d-v2nsv\" (UID: \"b837cadb-b512-4a4a-ae50-0b8729bd351a\") " pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.177265 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.434557 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"6f0ade02d6c15b5ee0a5f9dd0139867549f007ae6aa984abe36e03f82435d58f"} Feb 26 14:22:40 crc kubenswrapper[4809]: I0226 14:22:40.942674 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-598477c4d-v2nsv"] Feb 26 14:22:40 crc kubenswrapper[4809]: W0226 14:22:40.953644 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb837cadb_b512_4a4a_ae50_0b8729bd351a.slice/crio-02745acce3dad9842dea80d87cd3f7fab7a8c2cd4b23c3dd4e19f669a1a1dc65 WatchSource:0}: Error finding container 02745acce3dad9842dea80d87cd3f7fab7a8c2cd4b23c3dd4e19f669a1a1dc65: Status 404 returned error can't find the container with id 02745acce3dad9842dea80d87cd3f7fab7a8c2cd4b23c3dd4e19f669a1a1dc65 Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.442498 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" event={"ID":"8f9ff77d-2aae-4c19-8e9a-ffff460c41bf","Type":"ContainerStarted","Data":"cac78fab520ee154279cc9595804fc7d7d15659656f5010c11c7106e7e49276b"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.444821 4809 generic.go:334] "Generic (PLEG): container finished" podID="9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8" containerID="736301dc8d78bc2c0ce2108e110977c5a2f21c0a887c0ef14955a590bc065892" exitCode=0 Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.444870 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k8j7c" 
event={"ID":"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8","Type":"ContainerDied","Data":"736301dc8d78bc2c0ce2108e110977c5a2f21c0a887c0ef14955a590bc065892"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.447441 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" event={"ID":"98ab39a3-50c9-416d-a384-329b460f8e80","Type":"ContainerStarted","Data":"141508a0300fb23c20aef39843011110356a7936f78715bdf5601562e3bfb9ea"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.447473 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" event={"ID":"98ab39a3-50c9-416d-a384-329b460f8e80","Type":"ContainerStarted","Data":"ec81139efc1f2d345e6c6f78ec4fcfc4181f2021d3e57e7cdc0167368abfefe7"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.447488 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" event={"ID":"98ab39a3-50c9-416d-a384-329b460f8e80","Type":"ContainerStarted","Data":"16ec43e930ac817cc3a83bca391dabd92ec25cfd20ccffeb818ed76c3bf890c3"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.448262 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"02745acce3dad9842dea80d87cd3f7fab7a8c2cd4b23c3dd4e19f669a1a1dc65"} Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.463737 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-zr78f" podStartSLOduration=2.719375962 podStartE2EDuration="4.463723112s" podCreationTimestamp="2026-02-26 14:22:37 +0000 UTC" firstStartedPulling="2026-02-26 14:22:38.769616223 +0000 UTC m=+537.242936756" lastFinishedPulling="2026-02-26 14:22:40.513963383 +0000 UTC m=+538.987283906" observedRunningTime="2026-02-26 14:22:41.458538203 +0000 UTC m=+539.931858726" watchObservedRunningTime="2026-02-26 14:22:41.463723112 +0000 UTC m=+539.937043625" Feb 26 14:22:41 crc kubenswrapper[4809]: I0226 14:22:41.485321 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-jsk9n" podStartSLOduration=2.540911397 podStartE2EDuration="4.485302372s" podCreationTimestamp="2026-02-26 14:22:37 +0000 UTC" firstStartedPulling="2026-02-26 14:22:38.580076289 +0000 UTC m=+537.053396812" lastFinishedPulling="2026-02-26 14:22:40.524467264 +0000 UTC m=+538.997787787" observedRunningTime="2026-02-26 14:22:41.484242971 +0000 UTC m=+539.957563494" watchObservedRunningTime="2026-02-26 14:22:41.485302372 +0000 UTC m=+539.958622895" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.455258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerDied","Data":"14a9f29bbd2c93eeecffc4a254dca519cc68fc2b3d89cf13f221f9f1f661e638"} Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.455850 4809 generic.go:334] "Generic (PLEG): container finished" podID="29da76bc-c43e-4b4b-b3df-05ca2015e1e9" containerID="14a9f29bbd2c93eeecffc4a254dca519cc68fc2b3d89cf13f221f9f1f661e638" exitCode=0 Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.460797 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k8j7c" 
event={"ID":"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8","Type":"ContainerStarted","Data":"52c5051315f4034786977dc58e6145176979d93fbb5e5392367d92a68083b41e"} Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.460839 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k8j7c" event={"ID":"9bf5d71f-ffa5-437b-8cd2-fd9b8c15a8c8","Type":"ContainerStarted","Data":"c9c37cae04c0872f01fa28323e249e30b47d25504cbbe31778095d80aa64fc1f"} Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.516928 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-k8j7c" podStartSLOduration=3.753569246 podStartE2EDuration="5.516907491s" podCreationTimestamp="2026-02-26 14:22:37 +0000 UTC" firstStartedPulling="2026-02-26 14:22:38.75210095 +0000 UTC m=+537.225421483" lastFinishedPulling="2026-02-26 14:22:40.515439195 +0000 UTC m=+538.988759728" observedRunningTime="2026-02-26 14:22:42.511719992 +0000 UTC m=+540.985040525" watchObservedRunningTime="2026-02-26 14:22:42.516907491 +0000 UTC m=+540.990228014" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.606501 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.607411 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.626280 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632495 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632682 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632759 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632872 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle\") pod 
\"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.632995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29cn\" (UniqueName: \"kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734290 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734313 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29cn\" (UniqueName: \"kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734390 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734414 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.734434 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.736776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.736848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.737393 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.738081 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.745801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.746731 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.754965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h29cn\" (UniqueName: \"kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn\") pod \"console-585f6845b6-lfxrr\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:42 crc kubenswrapper[4809]: I0226 14:22:42.927799 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.084129 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-57b6f675c4-zbdkg"] Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.085704 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.091793 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57b6f675c4-zbdkg"] Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092228 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092303 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092483 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092592 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-43dpao14ev4op" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092665 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.092777 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-4pk9d" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145022 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5h\" (UniqueName: \"kubernetes.io/projected/4b6ea043-8b1b-45ed-8ac8-422d673444f8-kube-api-access-gfd5h\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145076 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-client-certs\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145125 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-client-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145224 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b6ea043-8b1b-45ed-8ac8-422d673444f8-audit-log\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 
26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145259 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-server-tls\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.145291 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-metrics-server-audit-profiles\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-server-tls\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-metrics-server-audit-profiles\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246368 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfd5h\" (UniqueName: \"kubernetes.io/projected/4b6ea043-8b1b-45ed-8ac8-422d673444f8-kube-api-access-gfd5h\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246400 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-client-certs\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246436 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-client-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.246505 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b6ea043-8b1b-45ed-8ac8-422d673444f8-audit-log\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.247345 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4b6ea043-8b1b-45ed-8ac8-422d673444f8-audit-log\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.247881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.249240 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4b6ea043-8b1b-45ed-8ac8-422d673444f8-metrics-server-audit-profiles\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.253338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-server-tls\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.254533 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-client-ca-bundle\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.255297 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b6ea043-8b1b-45ed-8ac8-422d673444f8-secret-metrics-client-certs\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.267744 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfd5h\" (UniqueName: \"kubernetes.io/projected/4b6ea043-8b1b-45ed-8ac8-422d673444f8-kube-api-access-gfd5h\") pod \"metrics-server-57b6f675c4-zbdkg\" (UID: \"4b6ea043-8b1b-45ed-8ac8-422d673444f8\") " pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.407664 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.463105 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.468271 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"62df2eac8ff6e447192eb8d131fb35477bfe9435dcefca5a32c234e735c86c97"} Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.468320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"fc0dd06df0775bc34b988e1ba9e3a0ac8a938dcaa0dcaf8e192f43e62d7217c1"} Feb 26 14:22:43 crc kubenswrapper[4809]: W0226 14:22:43.469898 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec59d446_ee11_4fee_b752_b6f6c2de1da7.slice/crio-80a144bb052f12df29c78dd4f41402bed851eebdc1f9f7935dbaac037542e398 WatchSource:0}: Error finding container 80a144bb052f12df29c78dd4f41402bed851eebdc1f9f7935dbaac037542e398: Status 404 returned error can't find the container with id 80a144bb052f12df29c78dd4f41402bed851eebdc1f9f7935dbaac037542e398 Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.576395 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt"] Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.577616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.579992 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.580111 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.595492 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt"] Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.651452 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a2ca0fbe-1b6a-489f-909a-589efde40622-monitoring-plugin-cert\") pod \"monitoring-plugin-7df6d976f7-8dzjt\" (UID: \"a2ca0fbe-1b6a-489f-909a-589efde40622\") " pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.752736 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a2ca0fbe-1b6a-489f-909a-589efde40622-monitoring-plugin-cert\") pod \"monitoring-plugin-7df6d976f7-8dzjt\" (UID: \"a2ca0fbe-1b6a-489f-909a-589efde40622\") " pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.760801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a2ca0fbe-1b6a-489f-909a-589efde40622-monitoring-plugin-cert\") pod \"monitoring-plugin-7df6d976f7-8dzjt\" (UID: 
\"a2ca0fbe-1b6a-489f-909a-589efde40622\") " pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.859557 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57b6f675c4-zbdkg"] Feb 26 14:22:43 crc kubenswrapper[4809]: W0226 14:22:43.866270 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b6ea043_8b1b_45ed_8ac8_422d673444f8.slice/crio-24fa3e7878fd9ccd73e7baab551d8faad1ef8ca363822ad2526e9ff78f40f41c WatchSource:0}: Error finding container 24fa3e7878fd9ccd73e7baab551d8faad1ef8ca363822ad2526e9ff78f40f41c: Status 404 returned error can't find the container with id 24fa3e7878fd9ccd73e7baab551d8faad1ef8ca363822ad2526e9ff78f40f41c Feb 26 14:22:43 crc kubenswrapper[4809]: I0226 14:22:43.899480 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.178266 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.181122 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.185602 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.185682 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.185626 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.185887 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.186092 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.186259 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.186400 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-plbhn" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.186576 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-d9ko8g6b4i39c" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.190055 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.190509 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.191242 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.221551 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.222069 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.228974 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.258946 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259055 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-web-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259087 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259110 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259169 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr6g\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-kube-api-access-qcr6g\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-metrics-client-certs\") pod 
\"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259290 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259387 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259440 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259463 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-config-out\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259600 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259620 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.259656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.336584 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt"] Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361024 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcr6g\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-kube-api-access-qcr6g\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361093 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361118 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361148 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361223 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361254 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-config-out\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361336 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361368 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361397 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361497 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361599 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361626 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-web-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361652 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.361675 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.363414 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.363475 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.363611 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.364975 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.366692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d80d3c46-edff-47e9-98e5-357fbc27f114-config-out\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.367406 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.367547 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.368847 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.369394 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.369824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d80d3c46-edff-47e9-98e5-357fbc27f114-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.370065 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.370356 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.372960 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-web-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.373635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.373790 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.373964 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-config\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.376614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d80d3c46-edff-47e9-98e5-357fbc27f114-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.384769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcr6g\" (UniqueName: \"kubernetes.io/projected/d80d3c46-edff-47e9-98e5-357fbc27f114-kube-api-access-qcr6g\") pod \"prometheus-k8s-0\" (UID: \"d80d3c46-edff-47e9-98e5-357fbc27f114\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.480777 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-585f6845b6-lfxrr" event={"ID":"ec59d446-ee11-4fee-b752-b6f6c2de1da7","Type":"ContainerStarted","Data":"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47"} Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.480835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-585f6845b6-lfxrr" event={"ID":"ec59d446-ee11-4fee-b752-b6f6c2de1da7","Type":"ContainerStarted","Data":"80a144bb052f12df29c78dd4f41402bed851eebdc1f9f7935dbaac037542e398"} Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.485818 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"e67721cb59933321708dc92bf72f722002ac3820364b04155db2614ad827f54b"} Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.488223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" event={"ID":"4b6ea043-8b1b-45ed-8ac8-422d673444f8","Type":"ContainerStarted","Data":"24fa3e7878fd9ccd73e7baab551d8faad1ef8ca363822ad2526e9ff78f40f41c"} Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.514354 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-585f6845b6-lfxrr" podStartSLOduration=2.51433424 podStartE2EDuration="2.51433424s" podCreationTimestamp="2026-02-26 14:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:22:44.506264059 +0000 UTC m=+542.979584582" watchObservedRunningTime="2026-02-26 14:22:44.51433424 +0000 UTC m=+542.987654763" Feb 26 14:22:44 crc kubenswrapper[4809]: I0226 14:22:44.526423 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:22:45 crc kubenswrapper[4809]: I0226 14:22:45.495097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" event={"ID":"a2ca0fbe-1b6a-489f-909a-589efde40622","Type":"ContainerStarted","Data":"a2fda13cec7162574c818fdffa02f29f36c39ec185023e6a7a2bc93192a411b3"} Feb 26 14:22:45 crc kubenswrapper[4809]: I0226 14:22:45.953388 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 26 14:22:46 crc kubenswrapper[4809]: I0226 14:22:46.507756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"8c3222759d776252b6177a94f8b858ea55b528ec2de6ecc0a92032da1b3db60d"} Feb 26 14:22:46 crc kubenswrapper[4809]: I0226 14:22:46.508226 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"b137248b749a21bc5ebaa1dddb3582d8cadb009ab905199a2ac8b67ae1842001"} Feb 26 14:22:46 crc kubenswrapper[4809]: I0226 14:22:46.511322 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"5fa2dd3a71d2073f58b8954e6a1a6713122feee78c78d6dd20432c7fcaffa6cf"} Feb 26 14:22:46 crc kubenswrapper[4809]: I0226 14:22:46.511363 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"2d51c536f896ec2d2e0106345f5fa81865f1d70783153b1ccbed762e0ab28a2d"} Feb 26 14:22:46 crc kubenswrapper[4809]: W0226 14:22:46.549533 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80d3c46_edff_47e9_98e5_357fbc27f114.slice/crio-aecd6adb72484faf38e4ca2dcdfc121a029183eddbba8c40207f37b571654120 WatchSource:0}: Error finding container aecd6adb72484faf38e4ca2dcdfc121a029183eddbba8c40207f37b571654120: Status 404 returned error can't find the container with id aecd6adb72484faf38e4ca2dcdfc121a029183eddbba8c40207f37b571654120 Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.524377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" event={"ID":"b837cadb-b512-4a4a-ae50-0b8729bd351a","Type":"ContainerStarted","Data":"99246267d14a863a18312291d7d806b3801d7a2829650da04561029d60564a51"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.526584 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.531989 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" event={"ID":"a2ca0fbe-1b6a-489f-909a-589efde40622","Type":"ContainerStarted","Data":"88129fa023e5f7ae0c27b37683bb1162a1cc4f1c0f39f2b4cfc32e226a202fb8"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.533114 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.542364 4809 generic.go:334] "Generic (PLEG): container 
finished" podID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerID="df3e4a36191913fcc5399342650eca57c169d575138d9a4ec640f43843fcf18d" exitCode=0 Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.542631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerDied","Data":"df3e4a36191913fcc5399342650eca57c169d575138d9a4ec640f43843fcf18d"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.542720 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"aecd6adb72484faf38e4ca2dcdfc121a029183eddbba8c40207f37b571654120"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.546211 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.562937 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podStartSLOduration=3.98206067 podStartE2EDuration="8.562914631s" podCreationTimestamp="2026-02-26 14:22:39 +0000 UTC" firstStartedPulling="2026-02-26 14:22:40.957305276 +0000 UTC m=+539.430625799" lastFinishedPulling="2026-02-26 14:22:45.538159237 +0000 UTC m=+544.011479760" observedRunningTime="2026-02-26 14:22:47.555255961 +0000 UTC m=+546.028576504" watchObservedRunningTime="2026-02-26 14:22:47.562914631 +0000 UTC m=+546.036235154" Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.563002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"1dc460a0b3ac5cbbc4406c71ab3877639ee9508fdc13f12f1ad3543008729273"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.563323 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"4af93d77ec975ded9edf49a7665204f76fc4c649259c51e5531b6944b17bedfd"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.570617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" event={"ID":"4b6ea043-8b1b-45ed-8ac8-422d673444f8","Type":"ContainerStarted","Data":"3b50ef35090754ddd25cc7441491b4d39daec105150646fa35d0b5c484884f51"} Feb 26 14:22:47 crc kubenswrapper[4809]: I0226 14:22:47.578621 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" podStartSLOduration=2.605724376 podStartE2EDuration="4.578599571s" podCreationTimestamp="2026-02-26 14:22:43 +0000 UTC" firstStartedPulling="2026-02-26 14:22:44.996678284 +0000 UTC m=+543.469998807" lastFinishedPulling="2026-02-26 14:22:46.969553479 +0000 UTC m=+545.442874002" observedRunningTime="2026-02-26 14:22:47.577251032 +0000 UTC m=+546.050571575" watchObservedRunningTime="2026-02-26 14:22:47.578599571 +0000 UTC m=+546.051920104" Feb 26 14:22:48 crc kubenswrapper[4809]: I0226 14:22:48.582926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"8e5179f342f4d9d8a5b3c2c45e80f9846841ec3afe15fc7405d275d700482c20"} Feb 26 14:22:48 crc 
kubenswrapper[4809]: I0226 14:22:48.582983 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"29da76bc-c43e-4b4b-b3df-05ca2015e1e9","Type":"ContainerStarted","Data":"f854b25e7d8dcb3eee26ee20762fe99d1d39f5432a455198522cba26ea2de5f6"} Feb 26 14:22:48 crc kubenswrapper[4809]: I0226 14:22:48.593765 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" Feb 26 14:22:48 crc kubenswrapper[4809]: I0226 14:22:48.611225 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" podStartSLOduration=2.509773302 podStartE2EDuration="5.61120312s" podCreationTimestamp="2026-02-26 14:22:43 +0000 UTC" firstStartedPulling="2026-02-26 14:22:43.86878436 +0000 UTC m=+542.342104883" lastFinishedPulling="2026-02-26 14:22:46.970214178 +0000 UTC m=+545.443534701" observedRunningTime="2026-02-26 14:22:47.642695392 +0000 UTC m=+546.116015925" watchObservedRunningTime="2026-02-26 14:22:48.61120312 +0000 UTC m=+547.084523653" Feb 26 14:22:48 crc kubenswrapper[4809]: I0226 14:22:48.618506 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.914652575 podStartE2EDuration="10.618474598s" podCreationTimestamp="2026-02-26 14:22:38 +0000 UTC" firstStartedPulling="2026-02-26 14:22:39.829418712 +0000 UTC m=+538.302739235" lastFinishedPulling="2026-02-26 14:22:45.533240735 +0000 UTC m=+544.006561258" observedRunningTime="2026-02-26 14:22:48.614097613 +0000 UTC m=+547.087418166" watchObservedRunningTime="2026-02-26 14:22:48.618474598 +0000 UTC m=+547.091795121" Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"6c1ed18c94602cfdb961a76d9c2925e6451f4f8fee6d701151fe83e536976515"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604745 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"af8583a042b5ca51ec5ef048415ea0f80575848f33f97b532aa02cf576ebc262"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"b94a6ca28541921146dcce036a7148fc3644b82b84532414673d50c5f515dbbc"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604764 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"e36e2cc2d0f7d0da0fb613830c17f7a8d640c87cc5e7dfa4ef18ec43038aba5e"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"08cd7bbba3ddc8daf2838edc7186351b5eb029a522fd5a1a3ab070e224a96bf6"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.604781 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"d80d3c46-edff-47e9-98e5-357fbc27f114","Type":"ContainerStarted","Data":"422b2a55d0e077ca4cd2dd6687e38ee327eb4b0bea990d5be723e14b5a2a8eaf"} Feb 26 14:22:51 crc kubenswrapper[4809]: I0226 14:22:51.647373 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.410923646 podStartE2EDuration="7.647344142s" podCreationTimestamp="2026-02-26 14:22:44 +0000 UTC" firstStartedPulling="2026-02-26 14:22:47.547889289 +0000 UTC m=+546.021209812" lastFinishedPulling="2026-02-26 14:22:50.784309785 +0000 UTC m=+549.257630308" observedRunningTime="2026-02-26 14:22:51.64167904 +0000 UTC m=+550.114999563" watchObservedRunningTime="2026-02-26 14:22:51.647344142 +0000 UTC m=+550.120664685" Feb 26 14:22:52 crc kubenswrapper[4809]: I0226 14:22:52.928612 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:52 crc kubenswrapper[4809]: I0226 14:22:52.928882 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:52 crc kubenswrapper[4809]: I0226 14:22:52.934836 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:53 crc kubenswrapper[4809]: I0226 14:22:53.619067 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:22:53 crc kubenswrapper[4809]: I0226 14:22:53.674418 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:22:54 crc kubenswrapper[4809]: I0226 14:22:54.526834 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:23:03 crc kubenswrapper[4809]: I0226 14:23:03.407817 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:23:03 crc kubenswrapper[4809]: I0226 14:23:03.408401 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:23:18 crc kubenswrapper[4809]: I0226 14:23:18.711228 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c2d27" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" containerName="console" containerID="cri-o://41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318" gracePeriod=15 Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.088687 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c2d27_b0a79f9d-2af2-4b36-aa5f-dddd41a12b74/console/0.log" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.088768 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.281485 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.281902 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.281950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.282038 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.282091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.282179 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gn5t\" (UniqueName: \"kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.282212 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca\") pod \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\" (UID: \"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74\") " Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.283977 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.283996 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config" (OuterVolumeSpecName: "console-config") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.284043 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca" (OuterVolumeSpecName: "service-ca") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.284049 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.288662 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.288928 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t" (OuterVolumeSpecName: "kube-api-access-6gn5t") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "kube-api-access-6gn5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.291057 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" (UID: "b0a79f9d-2af2-4b36-aa5f-dddd41a12b74"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383893 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383932 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383943 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gn5t\" (UniqueName: \"kubernetes.io/projected/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-kube-api-access-6gn5t\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383952 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383960 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383970 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.383981 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.791895 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c2d27_b0a79f9d-2af2-4b36-aa5f-dddd41a12b74/console/0.log" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.791952 4809 generic.go:334] "Generic (PLEG): container finished" podID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" containerID="41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318" exitCode=2 Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.791987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c2d27" event={"ID":"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74","Type":"ContainerDied","Data":"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318"} Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.792038 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c2d27" event={"ID":"b0a79f9d-2af2-4b36-aa5f-dddd41a12b74","Type":"ContainerDied","Data":"13b0be33912e838b93e9b5d3268309d401a2ccbcd360c8bfe18acfbcf815c1a6"} Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.792075 4809 scope.go:117] "RemoveContainer" containerID="41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.792250 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c2d27" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.814615 4809 scope.go:117] "RemoveContainer" containerID="41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318" Feb 26 14:23:19 crc kubenswrapper[4809]: E0226 14:23:19.815116 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318\": container with ID starting with 41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318 not found: ID does not exist" containerID="41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.815147 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318"} err="failed to get container status \"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318\": rpc error: code = NotFound desc = could not find container \"41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318\": container with ID starting with 41a51c97b00ffc6b5aa331f1db41c3417b244bcf69564b976f08cbdf0c995318 not found: ID does not exist" Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.827710 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:23:19 crc kubenswrapper[4809]: I0226 14:23:19.832500 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c2d27"] Feb 26 14:23:20 crc kubenswrapper[4809]: I0226 14:23:20.264636 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" path="/var/lib/kubelet/pods/b0a79f9d-2af2-4b36-aa5f-dddd41a12b74/volumes" Feb 26 14:23:23 crc kubenswrapper[4809]: I0226 14:23:23.417856 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:23:23 crc kubenswrapper[4809]: I0226 14:23:23.426434 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" Feb 26 14:23:44 crc kubenswrapper[4809]: I0226 14:23:44.527985 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:23:44 crc kubenswrapper[4809]: I0226 14:23:44.577234 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:23:44 crc kubenswrapper[4809]: I0226 14:23:44.989118 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 14:23:47 crc kubenswrapper[4809]: I0226 14:23:47.102728 4809 scope.go:117] "RemoveContainer" containerID="336983f357ca66f38878973ca5b297d225544cd2b0a3a733cc2de4976aad7f7e" Feb 26 14:23:58 crc kubenswrapper[4809]: I0226 14:23:58.878774 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:23:58 crc kubenswrapper[4809]: E0226 14:23:58.881198 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" containerName="console" Feb 26 14:23:58 crc kubenswrapper[4809]: I0226 14:23:58.881369 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" 
containerName="console" Feb 26 14:23:58 crc kubenswrapper[4809]: I0226 14:23:58.881706 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a79f9d-2af2-4b36-aa5f-dddd41a12b74" containerName="console" Feb 26 14:23:58 crc kubenswrapper[4809]: I0226 14:23:58.882585 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:58 crc kubenswrapper[4809]: I0226 14:23:58.901068 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045375 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045434 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045472 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045492 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sh9\" (UniqueName: \"kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045518 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.045545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.146891 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147352 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147443 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147538 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9sh9\" (UniqueName: \"kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147626 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.147701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.148039 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.148495 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.148579 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.149383 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.153051 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.153911 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.164224 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9sh9\" (UniqueName: \"kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9\") pod \"console-76fc989f8f-jg8s9\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.198545 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:23:59 crc kubenswrapper[4809]: I0226 14:23:59.622368 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.061981 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-jg8s9" event={"ID":"045c9e58-274e-4032-bbe3-4c63cdc9be1a","Type":"ContainerStarted","Data":"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483"} Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.062392 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-jg8s9" event={"ID":"045c9e58-274e-4032-bbe3-4c63cdc9be1a","Type":"ContainerStarted","Data":"234e1d9e2d995bfc3a06cfc7cb913b32320c2f4aaf003db759da7a2c6152b5a0"} Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.079312 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76fc989f8f-jg8s9" podStartSLOduration=2.079294253 podStartE2EDuration="2.079294253s" podCreationTimestamp="2026-02-26 14:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:24:00.079237062 +0000 UTC m=+618.552557585" watchObservedRunningTime="2026-02-26 14:24:00.079294253 +0000 UTC m=+618.552614776" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.142185 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535264-h9shg"] Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.143189 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.145708 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.145786 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.146719 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.160151 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-h9shg"] Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.259313 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnd65\" (UniqueName: \"kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65\") pod \"auto-csr-approver-29535264-h9shg\" (UID: \"868c3491-feae-4e59-bd9f-60b5ea306458\") " pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.361126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnd65\" (UniqueName: \"kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65\") pod \"auto-csr-approver-29535264-h9shg\" (UID: \"868c3491-feae-4e59-bd9f-60b5ea306458\") " pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.378526 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnd65\" (UniqueName: 
\"kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65\") pod \"auto-csr-approver-29535264-h9shg\" (UID: \"868c3491-feae-4e59-bd9f-60b5ea306458\") " pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.485076 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:00 crc kubenswrapper[4809]: I0226 14:24:00.905448 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-h9shg"] Feb 26 14:24:01 crc kubenswrapper[4809]: I0226 14:24:01.070105 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-h9shg" event={"ID":"868c3491-feae-4e59-bd9f-60b5ea306458","Type":"ContainerStarted","Data":"77a50451700c8d90874b3275d58872eed7634e2bde57c9d6d73688e4fdd54781"} Feb 26 14:24:02 crc kubenswrapper[4809]: I0226 14:24:02.078479 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-h9shg" event={"ID":"868c3491-feae-4e59-bd9f-60b5ea306458","Type":"ContainerStarted","Data":"cfe4fe28ce9fb920345eda1e92d945d4f23e23b9dc3d87a6d0193e41282004be"} Feb 26 14:24:02 crc kubenswrapper[4809]: I0226 14:24:02.093623 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535264-h9shg" podStartSLOduration=1.201867446 podStartE2EDuration="2.093604458s" podCreationTimestamp="2026-02-26 14:24:00 +0000 UTC" firstStartedPulling="2026-02-26 14:24:00.913906095 +0000 UTC m=+619.387226628" lastFinishedPulling="2026-02-26 14:24:01.805643117 +0000 UTC m=+620.278963640" observedRunningTime="2026-02-26 14:24:02.09194008 +0000 UTC m=+620.565260613" watchObservedRunningTime="2026-02-26 14:24:02.093604458 +0000 UTC m=+620.566924981" Feb 26 14:24:03 crc kubenswrapper[4809]: I0226 14:24:03.085717 4809 generic.go:334] "Generic (PLEG): container finished" podID="868c3491-feae-4e59-bd9f-60b5ea306458" containerID="cfe4fe28ce9fb920345eda1e92d945d4f23e23b9dc3d87a6d0193e41282004be" exitCode=0 Feb 26 14:24:03 crc kubenswrapper[4809]: I0226 14:24:03.085755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-h9shg" event={"ID":"868c3491-feae-4e59-bd9f-60b5ea306458","Type":"ContainerDied","Data":"cfe4fe28ce9fb920345eda1e92d945d4f23e23b9dc3d87a6d0193e41282004be"} Feb 26 14:24:04 crc kubenswrapper[4809]: I0226 14:24:04.392469 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:04 crc kubenswrapper[4809]: I0226 14:24:04.520846 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnd65\" (UniqueName: \"kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65\") pod \"868c3491-feae-4e59-bd9f-60b5ea306458\" (UID: \"868c3491-feae-4e59-bd9f-60b5ea306458\") " Feb 26 14:24:04 crc kubenswrapper[4809]: I0226 14:24:04.527363 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65" (OuterVolumeSpecName: "kube-api-access-mnd65") pod "868c3491-feae-4e59-bd9f-60b5ea306458" (UID: "868c3491-feae-4e59-bd9f-60b5ea306458"). InnerVolumeSpecName "kube-api-access-mnd65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:24:04 crc kubenswrapper[4809]: I0226 14:24:04.622124 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnd65\" (UniqueName: \"kubernetes.io/projected/868c3491-feae-4e59-bd9f-60b5ea306458-kube-api-access-mnd65\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:05 crc kubenswrapper[4809]: I0226 14:24:05.103358 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535264-h9shg" event={"ID":"868c3491-feae-4e59-bd9f-60b5ea306458","Type":"ContainerDied","Data":"77a50451700c8d90874b3275d58872eed7634e2bde57c9d6d73688e4fdd54781"} Feb 26 14:24:05 crc kubenswrapper[4809]: I0226 14:24:05.103758 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a50451700c8d90874b3275d58872eed7634e2bde57c9d6d73688e4fdd54781" Feb 26 14:24:05 crc kubenswrapper[4809]: I0226 14:24:05.103459 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535264-h9shg" Feb 26 14:24:05 crc kubenswrapper[4809]: I0226 14:24:05.179346 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-vds27"] Feb 26 14:24:05 crc kubenswrapper[4809]: I0226 14:24:05.184884 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535258-vds27"] Feb 26 14:24:06 crc kubenswrapper[4809]: I0226 14:24:06.269429 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2d2454-f66e-44f5-82c7-00a32b77db8a" path="/var/lib/kubelet/pods/7f2d2454-f66e-44f5-82c7-00a32b77db8a/volumes" Feb 26 14:24:09 crc kubenswrapper[4809]: I0226 14:24:09.200236 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:24:09 crc kubenswrapper[4809]: I0226 14:24:09.200591 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:24:09 crc kubenswrapper[4809]: I0226 14:24:09.204865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:24:10 crc kubenswrapper[4809]: I0226 14:24:10.147070 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:24:10 crc kubenswrapper[4809]: I0226 14:24:10.204814 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.248636 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-585f6845b6-lfxrr" podUID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" containerName="console" containerID="cri-o://c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47" gracePeriod=15 Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.586239 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-585f6845b6-lfxrr_ec59d446-ee11-4fee-b752-b6f6c2de1da7/console/0.log" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.586304 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.756875 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.756927 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.756976 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.757053 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.757162 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.757230 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.757270 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29cn\" (UniqueName: \"kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn\") pod \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\" (UID: \"ec59d446-ee11-4fee-b752-b6f6c2de1da7\") " Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.758291 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.758338 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config" (OuterVolumeSpecName: "console-config") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.758383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca" (OuterVolumeSpecName: "service-ca") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.758442 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.762799 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.763145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn" (OuterVolumeSpecName: "kube-api-access-h29cn") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "kube-api-access-h29cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.763173 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ec59d446-ee11-4fee-b752-b6f6c2de1da7" (UID: "ec59d446-ee11-4fee-b752-b6f6c2de1da7"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859527 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859575 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859589 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859601 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859615 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ec59d446-ee11-4fee-b752-b6f6c2de1da7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859626 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec59d446-ee11-4fee-b752-b6f6c2de1da7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:35 crc kubenswrapper[4809]: I0226 14:24:35.859637 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29cn\" (UniqueName: \"kubernetes.io/projected/ec59d446-ee11-4fee-b752-b6f6c2de1da7-kube-api-access-h29cn\") on node \"crc\" DevicePath \"\"" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304313 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-585f6845b6-lfxrr_ec59d446-ee11-4fee-b752-b6f6c2de1da7/console/0.log" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304358 4809 generic.go:334] "Generic (PLEG): container finished" podID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" containerID="c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47" exitCode=2 Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304388 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-585f6845b6-lfxrr" event={"ID":"ec59d446-ee11-4fee-b752-b6f6c2de1da7","Type":"ContainerDied","Data":"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47"} Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-585f6845b6-lfxrr" event={"ID":"ec59d446-ee11-4fee-b752-b6f6c2de1da7","Type":"ContainerDied","Data":"80a144bb052f12df29c78dd4f41402bed851eebdc1f9f7935dbaac037542e398"} Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304434 4809 scope.go:117] "RemoveContainer" containerID="c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.304439 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-585f6845b6-lfxrr" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.325043 4809 scope.go:117] "RemoveContainer" containerID="c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.333078 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:24:36 crc kubenswrapper[4809]: E0226 14:24:36.333168 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47\": container with ID starting with c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47 not found: ID does not exist" containerID="c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.333212 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47"} err="failed to get container status \"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47\": rpc error: code = NotFound desc = could not find container \"c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47\": container with ID starting with c67ba49b9fec03361ba5c4658b0bf6c94cc32ef2dfc0d9ba1d3f14319db3bc47 not found: ID does not exist" Feb 26 14:24:36 crc kubenswrapper[4809]: I0226 14:24:36.338818 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-585f6845b6-lfxrr"] Feb 26 14:24:38 crc kubenswrapper[4809]: I0226 14:24:38.267489 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" path="/var/lib/kubelet/pods/ec59d446-ee11-4fee-b752-b6f6c2de1da7/volumes" Feb 26 14:24:41 crc kubenswrapper[4809]: I0226 14:24:41.794774 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:24:41 crc kubenswrapper[4809]: I0226 14:24:41.795357 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:24:47 crc kubenswrapper[4809]: I0226 14:24:47.163517 4809 scope.go:117] "RemoveContainer" containerID="bbbde2fa9a85e0f8569d13cce3214f943832fc2fcd73aff0947066f6b51495bd" Feb 26 14:25:11 crc kubenswrapper[4809]: I0226 14:25:11.794062 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:25:11 crc kubenswrapper[4809]: I0226 14:25:11.794638 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 26 14:25:41 crc kubenswrapper[4809]: I0226 14:25:41.794116 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:25:41 crc kubenswrapper[4809]: I0226 14:25:41.794699 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:25:41 crc kubenswrapper[4809]: I0226 14:25:41.794744 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:25:41 crc kubenswrapper[4809]: I0226 14:25:41.795342 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:25:41 crc kubenswrapper[4809]: I0226 14:25:41.795396 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00" gracePeriod=600 Feb 26 14:25:42 crc kubenswrapper[4809]: I0226 14:25:42.705750 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00" exitCode=0 Feb 26 14:25:42 crc kubenswrapper[4809]: I0226 14:25:42.705809 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00"} Feb 26 14:25:42 crc kubenswrapper[4809]: I0226 14:25:42.706372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280"} Feb 26 14:25:42 crc kubenswrapper[4809]: I0226 14:25:42.706401 4809 scope.go:117] "RemoveContainer" containerID="8b029a781da04d5b599ae4d78e518a33cee01500e62bd25a1f0c9b49fc9817ed" Feb 26 14:25:47 crc kubenswrapper[4809]: I0226 14:25:47.251070 4809 scope.go:117] "RemoveContainer" containerID="19bce38b2ebc193c5058edb2495c4e5fdb2d01f2cc7d055f9c087810f461ba65" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.142007 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535266-hwbtt"] Feb 26 14:26:00 crc kubenswrapper[4809]: E0226 14:26:00.143037 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868c3491-feae-4e59-bd9f-60b5ea306458" containerName="oc" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143060 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="868c3491-feae-4e59-bd9f-60b5ea306458" containerName="oc" Feb 26 14:26:00 crc kubenswrapper[4809]: E0226 14:26:00.143092 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" containerName="console" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143104 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" containerName="console" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143273 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="868c3491-feae-4e59-bd9f-60b5ea306458" containerName="oc" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143295 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec59d446-ee11-4fee-b752-b6f6c2de1da7" containerName="console" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143974 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.143970 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-hwbtt"] Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.148550 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.148867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.149033 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.264126 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67jh\" (UniqueName: \"kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh\") pod \"auto-csr-approver-29535266-hwbtt\" (UID: \"321cb915-1491-48c2-95a9-07d25d34d3cd\") " pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.366214 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67jh\" (UniqueName: \"kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh\") pod \"auto-csr-approver-29535266-hwbtt\" (UID: \"321cb915-1491-48c2-95a9-07d25d34d3cd\") " pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.383863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67jh\" (UniqueName: \"kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh\") pod \"auto-csr-approver-29535266-hwbtt\" (UID: \"321cb915-1491-48c2-95a9-07d25d34d3cd\") " pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.471957 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.668606 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-hwbtt"] Feb 26 14:26:00 crc kubenswrapper[4809]: I0226 14:26:00.840617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" event={"ID":"321cb915-1491-48c2-95a9-07d25d34d3cd","Type":"ContainerStarted","Data":"956e580e25b66cee9fc829bc85c5a31cdbfd437eaf373a7d3f167c030155013b"} Feb 26 14:26:02 crc kubenswrapper[4809]: I0226 14:26:02.855168 4809 generic.go:334] "Generic (PLEG): container finished" podID="321cb915-1491-48c2-95a9-07d25d34d3cd" containerID="24c9490053db79a15b9c8554014251d097965d77984cca65d207015db15eba90" exitCode=0 Feb 26 14:26:02 crc kubenswrapper[4809]: I0226 14:26:02.855266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" event={"ID":"321cb915-1491-48c2-95a9-07d25d34d3cd","Type":"ContainerDied","Data":"24c9490053db79a15b9c8554014251d097965d77984cca65d207015db15eba90"} Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.115590 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.245098 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67jh\" (UniqueName: \"kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh\") pod \"321cb915-1491-48c2-95a9-07d25d34d3cd\" (UID: \"321cb915-1491-48c2-95a9-07d25d34d3cd\") " Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.250364 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh" (OuterVolumeSpecName: "kube-api-access-f67jh") pod "321cb915-1491-48c2-95a9-07d25d34d3cd" (UID: "321cb915-1491-48c2-95a9-07d25d34d3cd"). InnerVolumeSpecName "kube-api-access-f67jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.346542 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67jh\" (UniqueName: \"kubernetes.io/projected/321cb915-1491-48c2-95a9-07d25d34d3cd-kube-api-access-f67jh\") on node \"crc\" DevicePath \"\"" Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.872153 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" event={"ID":"321cb915-1491-48c2-95a9-07d25d34d3cd","Type":"ContainerDied","Data":"956e580e25b66cee9fc829bc85c5a31cdbfd437eaf373a7d3f167c030155013b"} Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.872209 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535266-hwbtt" Feb 26 14:26:04 crc kubenswrapper[4809]: I0226 14:26:04.872228 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956e580e25b66cee9fc829bc85c5a31cdbfd437eaf373a7d3f167c030155013b" Feb 26 14:26:05 crc kubenswrapper[4809]: I0226 14:26:05.182282 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-26swq"] Feb 26 14:26:05 crc kubenswrapper[4809]: I0226 14:26:05.189843 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535260-26swq"] Feb 26 14:26:06 crc kubenswrapper[4809]: I0226 14:26:06.271163 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a84f2a5-2dad-4b84-944f-436bc29e98d5" path="/var/lib/kubelet/pods/8a84f2a5-2dad-4b84-944f-436bc29e98d5/volumes" Feb 26 14:27:47 crc kubenswrapper[4809]: I0226 14:27:47.316370 4809 scope.go:117] "RemoveContainer" containerID="49c41c2c5959e8c577af5777d675af4c277a67b777a1025e52327720e5a7bf21" Feb 26 14:27:51 crc kubenswrapper[4809]: I0226 14:27:51.333455 4809 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.147355 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535268-48gcc"] Feb 26 14:28:00 crc kubenswrapper[4809]: E0226 14:28:00.148985 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321cb915-1491-48c2-95a9-07d25d34d3cd" containerName="oc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.149045 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="321cb915-1491-48c2-95a9-07d25d34d3cd" containerName="oc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.149316 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="321cb915-1491-48c2-95a9-07d25d34d3cd" containerName="oc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.150265 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.153146 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.153629 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.159445 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-48gcc"] Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.161829 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.216385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvzp\" (UniqueName: \"kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp\") pod \"auto-csr-approver-29535268-48gcc\" (UID: \"821c1a70-89c5-433a-ae42-800d966fdbe2\") " pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.318257 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvzp\" (UniqueName: \"kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp\") pod \"auto-csr-approver-29535268-48gcc\" (UID: \"821c1a70-89c5-433a-ae42-800d966fdbe2\") " pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.342783 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hvzp\" (UniqueName: \"kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp\") pod \"auto-csr-approver-29535268-48gcc\" (UID: \"821c1a70-89c5-433a-ae42-800d966fdbe2\") " pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.474555 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.872028 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-48gcc"] Feb 26 14:28:00 crc kubenswrapper[4809]: W0226 14:28:00.879979 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod821c1a70_89c5_433a_ae42_800d966fdbe2.slice/crio-5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f WatchSource:0}: Error finding container 5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f: Status 404 returned error can't find the container with id 5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f Feb 26 14:28:00 crc kubenswrapper[4809]: I0226 14:28:00.882773 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:28:01 crc kubenswrapper[4809]: I0226 14:28:01.633795 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-48gcc" event={"ID":"821c1a70-89c5-433a-ae42-800d966fdbe2","Type":"ContainerStarted","Data":"5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f"} Feb 26 14:28:03 crc kubenswrapper[4809]: I0226 14:28:03.673368 4809 generic.go:334] "Generic (PLEG): container finished" podID="821c1a70-89c5-433a-ae42-800d966fdbe2" containerID="59922ba4eba47d79d502158e9f929426733de1b7e1706263e6ade028c7f25244" exitCode=0 Feb 26 14:28:03 crc kubenswrapper[4809]: I0226 14:28:03.673454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-48gcc" event={"ID":"821c1a70-89c5-433a-ae42-800d966fdbe2","Type":"ContainerDied","Data":"59922ba4eba47d79d502158e9f929426733de1b7e1706263e6ade028c7f25244"} Feb 26 14:28:04 crc kubenswrapper[4809]: I0226 14:28:04.907038 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.088657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hvzp\" (UniqueName: \"kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp\") pod \"821c1a70-89c5-433a-ae42-800d966fdbe2\" (UID: \"821c1a70-89c5-433a-ae42-800d966fdbe2\") " Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.097033 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp" (OuterVolumeSpecName: "kube-api-access-6hvzp") pod "821c1a70-89c5-433a-ae42-800d966fdbe2" (UID: "821c1a70-89c5-433a-ae42-800d966fdbe2"). InnerVolumeSpecName "kube-api-access-6hvzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.190557 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hvzp\" (UniqueName: \"kubernetes.io/projected/821c1a70-89c5-433a-ae42-800d966fdbe2-kube-api-access-6hvzp\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.259996 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg"] Feb 26 14:28:05 crc kubenswrapper[4809]: E0226 14:28:05.260311 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821c1a70-89c5-433a-ae42-800d966fdbe2" containerName="oc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.260336 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="821c1a70-89c5-433a-ae42-800d966fdbe2" containerName="oc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.260496 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="821c1a70-89c5-433a-ae42-800d966fdbe2" containerName="oc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.261458 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.263827 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.275962 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg"] Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.393854 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.394129 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spgvw\" (UniqueName: \"kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.394253 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.495987 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.496223 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spgvw\" (UniqueName: \"kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.496331 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.497311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.497328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.513629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spgvw\" (UniqueName: \"kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.583549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.692733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535268-48gcc" event={"ID":"821c1a70-89c5-433a-ae42-800d966fdbe2","Type":"ContainerDied","Data":"5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f"} Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.694209 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5406a6817a19c8e159b0f01c5d0304ad5757e278c04167bf0b90035cba30c18f" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.693249 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535268-48gcc" Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.787885 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg"] Feb 26 14:28:05 crc kubenswrapper[4809]: W0226 14:28:05.788185 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fb66480_ee41_4b31_a0c8_3c0acc10701b.slice/crio-62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9 WatchSource:0}: Error finding container 62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9: Status 404 returned error can't find the container with id 62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9 Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.957762 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-x6crs"] Feb 26 14:28:05 crc kubenswrapper[4809]: I0226 14:28:05.962235 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535262-x6crs"] Feb 26 14:28:06 crc kubenswrapper[4809]: I0226 14:28:06.290742 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4664130-dc57-4baa-a6cc-17f864bfbe02" path="/var/lib/kubelet/pods/c4664130-dc57-4baa-a6cc-17f864bfbe02/volumes" Feb 26 14:28:06 crc kubenswrapper[4809]: I0226 14:28:06.701705 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerStarted","Data":"13abbfc428f594751b6d96be1575498436eddbd3a534dbed9a33077f5d92558a"} Feb 26 14:28:06 crc kubenswrapper[4809]: I0226 14:28:06.701765 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerStarted","Data":"62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9"} Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.617155 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"] Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.618275 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.634300 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"] Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.707946 4809 generic.go:334] "Generic (PLEG): container finished" podID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerID="13abbfc428f594751b6d96be1575498436eddbd3a534dbed9a33077f5d92558a" exitCode=0 Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.707991 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerDied","Data":"13abbfc428f594751b6d96be1575498436eddbd3a534dbed9a33077f5d92558a"} Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.730574 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prtqv\" (UniqueName: \"kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.730613 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.730763 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.831713 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prtqv\" (UniqueName: \"kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.831780 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.831884 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.832515 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities\") pod \"redhat-operators-t6bl4\" (UID: 
\"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.832610 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.857067 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prtqv\" (UniqueName: \"kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv\") pod \"redhat-operators-t6bl4\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:07 crc kubenswrapper[4809]: I0226 14:28:07.936400 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:08 crc kubenswrapper[4809]: W0226 14:28:08.350583 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3162cb2_3846_48b8_af83_19fc19296b81.slice/crio-d4178c59f2f5487773356bfec3c87d53dc4e1b4683aeeb754ca6b0229cd3514d WatchSource:0}: Error finding container d4178c59f2f5487773356bfec3c87d53dc4e1b4683aeeb754ca6b0229cd3514d: Status 404 returned error can't find the container with id d4178c59f2f5487773356bfec3c87d53dc4e1b4683aeeb754ca6b0229cd3514d Feb 26 14:28:08 crc kubenswrapper[4809]: I0226 14:28:08.360484 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"] Feb 26 14:28:08 crc kubenswrapper[4809]: I0226 14:28:08.745444 4809 generic.go:334] "Generic (PLEG): container finished" podID="e3162cb2-3846-48b8-af83-19fc19296b81" containerID="05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e" exitCode=0 Feb 26 14:28:08 crc kubenswrapper[4809]: I0226 14:28:08.745647 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerDied","Data":"05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e"} Feb 26 14:28:08 crc kubenswrapper[4809]: I0226 14:28:08.745741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerStarted","Data":"d4178c59f2f5487773356bfec3c87d53dc4e1b4683aeeb754ca6b0229cd3514d"} Feb 26 14:28:09 crc kubenswrapper[4809]: I0226 14:28:09.757976 4809 generic.go:334] "Generic (PLEG): container finished" podID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerID="ac12ca26fc868d5754ce0e84011842fe1cef3215d95bc59bbe7533c62914629b" exitCode=0 Feb 26 14:28:09 crc kubenswrapper[4809]: I0226 14:28:09.759307 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerDied","Data":"ac12ca26fc868d5754ce0e84011842fe1cef3215d95bc59bbe7533c62914629b"} Feb 26 14:28:10 crc kubenswrapper[4809]: E0226 14:28:10.511760 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3162cb2_3846_48b8_af83_19fc19296b81.slice/crio-62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3162cb2_3846_48b8_af83_19fc19296b81.slice/crio-conmon-62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e.scope\": RecentStats: unable to find data in memory cache]" Feb 26 14:28:10 crc kubenswrapper[4809]: I0226 14:28:10.771854 4809 generic.go:334] "Generic (PLEG): container finished" podID="e3162cb2-3846-48b8-af83-19fc19296b81" containerID="62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e" exitCode=0 Feb 26 14:28:10 crc kubenswrapper[4809]: I0226 14:28:10.771931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerDied","Data":"62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e"} Feb 26 14:28:10 crc kubenswrapper[4809]: I0226 14:28:10.778166 4809 generic.go:334] "Generic (PLEG): container finished" podID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerID="d9a6bf57b93449759062753556a723ea525b1cca2c2dba66ffc1f5c5e0cc08bd" exitCode=0 Feb 26 14:28:10 crc kubenswrapper[4809]: I0226 14:28:10.778215 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerDied","Data":"d9a6bf57b93449759062753556a723ea525b1cca2c2dba66ffc1f5c5e0cc08bd"} Feb 26 14:28:11 crc kubenswrapper[4809]: I0226 14:28:11.787229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerStarted","Data":"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e"} Feb 26 14:28:11 crc kubenswrapper[4809]: I0226 14:28:11.793599 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:28:11 crc kubenswrapper[4809]: I0226 14:28:11.793648 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:28:11 crc kubenswrapper[4809]: I0226 14:28:11.808442 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6bl4" podStartSLOduration=2.31844288 podStartE2EDuration="4.808424452s" podCreationTimestamp="2026-02-26 14:28:07 +0000 UTC" firstStartedPulling="2026-02-26 14:28:08.811738262 +0000 UTC m=+867.285058785" lastFinishedPulling="2026-02-26 14:28:11.301719834 +0000 UTC m=+869.775040357" observedRunningTime="2026-02-26 14:28:11.806501736 +0000 UTC m=+870.279822259" watchObservedRunningTime="2026-02-26 14:28:11.808424452 +0000 UTC m=+870.281744975" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.034824 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.107521 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spgvw\" (UniqueName: \"kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw\") pod \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.107620 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util\") pod \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.107657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle\") pod \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\" (UID: \"0fb66480-ee41-4b31-a0c8-3c0acc10701b\") " Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.110301 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle" (OuterVolumeSpecName: "bundle") pod "0fb66480-ee41-4b31-a0c8-3c0acc10701b" (UID: "0fb66480-ee41-4b31-a0c8-3c0acc10701b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.116612 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw" (OuterVolumeSpecName: "kube-api-access-spgvw") pod "0fb66480-ee41-4b31-a0c8-3c0acc10701b" (UID: "0fb66480-ee41-4b31-a0c8-3c0acc10701b"). InnerVolumeSpecName "kube-api-access-spgvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.130972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util" (OuterVolumeSpecName: "util") pod "0fb66480-ee41-4b31-a0c8-3c0acc10701b" (UID: "0fb66480-ee41-4b31-a0c8-3c0acc10701b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.209546 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spgvw\" (UniqueName: \"kubernetes.io/projected/0fb66480-ee41-4b31-a0c8-3c0acc10701b-kube-api-access-spgvw\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.209582 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.209591 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb66480-ee41-4b31-a0c8-3c0acc10701b-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.799338 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" event={"ID":"0fb66480-ee41-4b31-a0c8-3c0acc10701b","Type":"ContainerDied","Data":"62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9"} Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.799389 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ae480ab41dc4ce3f81c3a76d303326efb07df2db1adce7b12fc2df154789e9" Feb 26 14:28:12 crc kubenswrapper[4809]: I0226 14:28:12.799415 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg" Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.546360 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qwqmq"] Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547117 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-controller" containerID="cri-o://a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547216 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-node" containerID="cri-o://d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547270 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="sbdb" containerID="cri-o://5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547334 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547177 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="nbdb" 
containerID="cri-o://0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547366 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="northd" containerID="cri-o://2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.547276 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-acl-logging" containerID="cri-o://d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9" gracePeriod=30 Feb 26 14:28:16 crc kubenswrapper[4809]: I0226 14:28:16.583646 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" containerID="cri-o://4136f2637e699c68ac367d76cfbcc0365cba0606b4c0dd697df232fe0e5c0b77" gracePeriod=30 Feb 26 14:28:17 crc kubenswrapper[4809]: I0226 14:28:17.937506 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:17 crc kubenswrapper[4809]: I0226 14:28:17.938479 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:17 crc kubenswrapper[4809]: I0226 14:28:17.979595 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:18 crc kubenswrapper[4809]: I0226 14:28:18.919529 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.843854 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.845791 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-acl-logging/0.log" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846526 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923" exitCode=0 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846552 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f" exitCode=0 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846561 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9" exitCode=143 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846599 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f"}
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:19.846651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9"}
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.301828 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"]
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.853105 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/2.log"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.853830 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/1.log"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.853871 4809 generic.go:334] "Generic (PLEG): container finished" podID="9bca1e32-8331-4d7d-acf3-7ee31374c8bd" containerID="8e8d94bb545a2efa853b4d03334e9577ab1599686436650376bb4f50567df458" exitCode=2
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.853917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerDied","Data":"8e8d94bb545a2efa853b4d03334e9577ab1599686436650376bb4f50567df458"}
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.853948 4809 scope.go:117] "RemoveContainer" containerID="e365e2252d0f9b5b5e20cf96d98439318c048a1fc8d43f622c63af6dc17a6639"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.854404 4809 scope.go:117] "RemoveContainer" containerID="8e8d94bb545a2efa853b4d03334e9577ab1599686436650376bb4f50567df458"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.857619 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.860349 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-acl-logging/0.log"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.860711 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-controller/0.log"
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868252 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="4136f2637e699c68ac367d76cfbcc0365cba0606b4c0dd697df232fe0e5c0b77" exitCode=0
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868289 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929" exitCode=0
Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868300 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a"
exitCode=0 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868308 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888" exitCode=0 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868316 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerID="a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3" exitCode=143 Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868843 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"4136f2637e699c68ac367d76cfbcc0365cba0606b4c0dd697df232fe0e5c0b77"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868872 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868885 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" event={"ID":"4eaaa554-c5bb-455b-ad10-96f71caf4e26","Type":"ContainerDied","Data":"36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca"} Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.868925 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e1e04ef23abf8b8a547a0b31e69356b6345ffec40ad0f3b8b4a0783c59ecca" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.880929 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovnkube-controller/3.log" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.882361 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-acl-logging/0.log" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.882712 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-controller/0.log" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.882998 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.905783 4809 scope.go:117] "RemoveContainer" containerID="7d128d821e5ced084c07f515261b1ae3fc59a35c038206bc2a5286a1b5d5fb2f" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.971974 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972038 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972075 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972129 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972150 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972189 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972210 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972262 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc 
kubenswrapper[4809]: I0226 14:28:20.972285 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972303 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972327 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972347 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972374 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swptd\" (UniqueName: \"kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972427 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972445 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972472 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.972493 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 
14:28:20.972511 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash\") pod \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\" (UID: \"4eaaa554-c5bb-455b-ad10-96f71caf4e26\") " Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.973309 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket" (OuterVolumeSpecName: "log-socket") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.973685 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.973719 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.973744 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.973767 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979123 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979198 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979512 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979546 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979570 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.979595 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.980110 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.980147 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.980171 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.983602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.983699 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.983739 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log" (OuterVolumeSpecName: "node-log") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.984849 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash" (OuterVolumeSpecName: "host-slash") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:20 crc kubenswrapper[4809]: I0226 14:28:20.994338 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd" (OuterVolumeSpecName: "kube-api-access-swptd") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "kube-api-access-swptd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025609 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2jlg8"] Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025834 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025848 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025856 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-node" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025863 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-node" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025873 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="nbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025880 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="nbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025889 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-acl-logging" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025895 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" 
containerName="ovn-acl-logging" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025903 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025909 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025915 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="sbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025921 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="sbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025928 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025934 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025942 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025948 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025956 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kubecfg-setup" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025961 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kubecfg-setup" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025971 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025976 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025984 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.025990 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.025996 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="extract" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026002 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="extract" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.026023 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="util" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026030 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" 
containerName="util" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.026043 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="northd" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026049 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="northd" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.026060 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="pull" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026065 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="pull" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026179 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026188 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026199 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-node" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026205 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="nbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026213 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026222 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026228 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="sbdb" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026240 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovn-acl-logging" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026247 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="northd" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026254 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026262 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb66480-ee41-4b31-a0c8-3c0acc10701b" containerName="extract" Feb 26 14:28:21 crc kubenswrapper[4809]: E0226 14:28:21.026370 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026377 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026480 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.026488 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" containerName="ovnkube-controller" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.037893 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.041962 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4eaaa554-c5bb-455b-ad10-96f71caf4e26" (UID: "4eaaa554-c5bb-455b-ad10-96f71caf4e26"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075792 4809 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075829 4809 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075841 4809 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075852 4809 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075864 4809 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075875 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075886 4809 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075897 4809 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075907 4809 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075916 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovn-node-metrics-cert\") on 
node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075928 4809 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075942 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075955 4809 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075967 4809 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075977 4809 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075990 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eaaa554-c5bb-455b-ad10-96f71caf4e26-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.075999 4809 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.076026 4809 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.076037 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swptd\" (UniqueName: \"kubernetes.io/projected/4eaaa554-c5bb-455b-ad10-96f71caf4e26-kube-api-access-swptd\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.076049 4809 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eaaa554-c5bb-455b-ad10-96f71caf4e26-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177701 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f0de330-1376-402e-910d-0029d3ff5534-ovn-node-metrics-cert\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-ovn\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-config\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-etc-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-netns\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177885 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-netd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177907 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177930 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-script-lib\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.177958 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-bin\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178024 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-env-overrides\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-var-lib-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-systemd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178102 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-log-socket\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178156 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-systemd-units\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178199 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-kubelet\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178233 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npk4k\" (UniqueName: \"kubernetes.io/projected/9f0de330-1376-402e-910d-0029d3ff5534-kube-api-access-npk4k\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178254 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178270 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-slash\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178287 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.178301 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-node-log\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.279909 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-systemd-units\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.279965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-kubelet\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.279995 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npk4k\" (UniqueName: \"kubernetes.io/projected/9f0de330-1376-402e-910d-0029d3ff5534-kube-api-access-npk4k\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280058 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-systemd-units\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280103 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-slash\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280107 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280076 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-slash\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280068 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-kubelet\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280157 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280181 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-node-log\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280254 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f0de330-1376-402e-910d-0029d3ff5534-ovn-node-metrics-cert\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280289 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-ovn\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280313 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-config\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280336 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-etc-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-netns\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280390 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-netd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280411 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280434 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-script-lib\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280467 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-bin\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280506 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-env-overrides\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280530 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-var-lib-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280552 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-systemd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280582 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-log-socket\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280663 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-log-socket\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280694 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-etc-openvswitch\") pod 
\"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280721 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-netns\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280752 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-netd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280781 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-run-ovn-kubernetes\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281126 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-config\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281182 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-ovn\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.280290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-node-log\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281219 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-host-cni-bin\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-var-lib-openvswitch\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281266 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f0de330-1376-402e-910d-0029d3ff5534-run-systemd\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 
14:28:21.281275 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-env-overrides\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.281399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f0de330-1376-402e-910d-0029d3ff5534-ovnkube-script-lib\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.284271 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f0de330-1376-402e-910d-0029d3ff5534-ovn-node-metrics-cert\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.327732 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npk4k\" (UniqueName: \"kubernetes.io/projected/9f0de330-1376-402e-910d-0029d3ff5534-kube-api-access-npk4k\") pod \"ovnkube-node-2jlg8\" (UID: \"9f0de330-1376-402e-910d-0029d3ff5534\") " pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.359510 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:21 crc kubenswrapper[4809]: W0226 14:28:21.381117 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0de330_1376_402e_910d_0029d3ff5534.slice/crio-290dd155847afac3433c5dff33dd3d361880831baafe35a4ab02bc6b371315e9 WatchSource:0}: Error finding container 290dd155847afac3433c5dff33dd3d361880831baafe35a4ab02bc6b371315e9: Status 404 returned error can't find the container with id 290dd155847afac3433c5dff33dd3d361880831baafe35a4ab02bc6b371315e9 Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.887399 4809 generic.go:334] "Generic (PLEG): container finished" podID="9f0de330-1376-402e-910d-0029d3ff5534" containerID="95b11a31bf1ce849e43149d909e93a152a793e4af036761a2b1c8578da1e6ac2" exitCode=0 Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.887467 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerDied","Data":"95b11a31bf1ce849e43149d909e93a152a793e4af036761a2b1c8578da1e6ac2"} Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.887498 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"290dd155847afac3433c5dff33dd3d361880831baafe35a4ab02bc6b371315e9"} Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.892295 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-acl-logging/0.log" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.892705 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-controller/0.log" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.893160 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qwqmq" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.895860 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ccvqm_9bca1e32-8331-4d7d-acf3-7ee31374c8bd/kube-multus/2.log" Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.896085 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6bl4" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="registry-server" containerID="cri-o://f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e" gracePeriod=2 Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.896167 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ccvqm" event={"ID":"9bca1e32-8331-4d7d-acf3-7ee31374c8bd","Type":"ContainerStarted","Data":"0e6b8c54a25ed48fa85dcd956b6e87292923fa1b922d0dad318984ae684a1d04"} Feb 26 14:28:21 crc kubenswrapper[4809]: I0226 14:28:21.998613 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qwqmq"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.005652 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qwqmq"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.264950 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eaaa554-c5bb-455b-ad10-96f71caf4e26" path="/var/lib/kubelet/pods/4eaaa554-c5bb-455b-ad10-96f71caf4e26/volumes" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.394153 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.395055 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.400536 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.400537 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zsrz7" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.402919 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.497871 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9dp\" (UniqueName: \"kubernetes.io/projected/87348b90-199e-442d-a9ec-263588a8cc54-kube-api-access-8t9dp\") pod \"obo-prometheus-operator-68bc856cb9-h5gk4\" (UID: \"87348b90-199e-442d-a9ec-263588a8cc54\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.517896 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.518632 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.520866 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vnjxd" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.522666 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.535266 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.536465 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.599231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.599305 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9dp\" (UniqueName: \"kubernetes.io/projected/87348b90-199e-442d-a9ec-263588a8cc54-kube-api-access-8t9dp\") pod \"obo-prometheus-operator-68bc856cb9-h5gk4\" (UID: \"87348b90-199e-442d-a9ec-263588a8cc54\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.599385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.599403 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.599426 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.616997 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9dp\" (UniqueName: \"kubernetes.io/projected/87348b90-199e-442d-a9ec-263588a8cc54-kube-api-access-8t9dp\") 
pod \"obo-prometheus-operator-68bc856cb9-h5gk4\" (UID: \"87348b90-199e-442d-a9ec-263588a8cc54\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.633683 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qq6nr"] Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.634602 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.638490 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kdl4b" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.638694 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700097 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700130 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700152 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700177 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77zd\" (UniqueName: \"kubernetes.io/projected/cc062236-67aa-4219-8e13-45ff2cf44f8e-kube-api-access-d77zd\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: \"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700195 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.700241 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc062236-67aa-4219-8e13-45ff2cf44f8e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: 
\"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.703500 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.703849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b56a5ce7-761a-410a-84e8-41e01ad2b55e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw\" (UID: \"b56a5ce7-761a-410a-84e8-41e01ad2b55e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.705386 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.705714 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/906a26fc-9fb3-4964-8c39-ef42e4915be5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw\" (UID: \"906a26fc-9fb3-4964-8c39-ef42e4915be5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.711903 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.720685 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.777808 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9tgqx"] Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.778092 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="extract-utilities" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.778108 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="extract-utilities" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.778121 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="registry-server" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.778128 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="registry-server" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.778140 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="extract-content" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.778148 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="extract-content" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.778273 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" containerName="registry-server" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.778737 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.783512 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gwphx" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.800857 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities\") pod \"e3162cb2-3846-48b8-af83-19fc19296b81\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.800928 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content\") pod \"e3162cb2-3846-48b8-af83-19fc19296b81\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.801079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prtqv\" (UniqueName: \"kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv\") pod \"e3162cb2-3846-48b8-af83-19fc19296b81\" (UID: \"e3162cb2-3846-48b8-af83-19fc19296b81\") " Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.801340 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77zd\" (UniqueName: \"kubernetes.io/projected/cc062236-67aa-4219-8e13-45ff2cf44f8e-kube-api-access-d77zd\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: \"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc 
kubenswrapper[4809]: I0226 14:28:22.801397 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc062236-67aa-4219-8e13-45ff2cf44f8e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: \"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.801819 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities" (OuterVolumeSpecName: "utilities") pod "e3162cb2-3846-48b8-af83-19fc19296b81" (UID: "e3162cb2-3846-48b8-af83-19fc19296b81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.823226 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(047f10ab2574070184fd0eaeb5ec6acf26cf7e6cf3d7eaa7912c10a1356cc57b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.823296 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(047f10ab2574070184fd0eaeb5ec6acf26cf7e6cf3d7eaa7912c10a1356cc57b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.823317 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(047f10ab2574070184fd0eaeb5ec6acf26cf7e6cf3d7eaa7912c10a1356cc57b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.823355 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators(87348b90-199e-442d-a9ec-263588a8cc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators(87348b90-199e-442d-a9ec-263588a8cc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(047f10ab2574070184fd0eaeb5ec6acf26cf7e6cf3d7eaa7912c10a1356cc57b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" podUID="87348b90-199e-442d-a9ec-263588a8cc54" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.870856 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc062236-67aa-4219-8e13-45ff2cf44f8e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: \"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.871446 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv" (OuterVolumeSpecName: "kube-api-access-prtqv") pod "e3162cb2-3846-48b8-af83-19fc19296b81" (UID: "e3162cb2-3846-48b8-af83-19fc19296b81"). InnerVolumeSpecName "kube-api-access-prtqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.872824 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.874620 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.878425 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77zd\" (UniqueName: \"kubernetes.io/projected/cc062236-67aa-4219-8e13-45ff2cf44f8e-kube-api-access-d77zd\") pod \"observability-operator-59bdc8b94-qq6nr\" (UID: \"cc062236-67aa-4219-8e13-45ff2cf44f8e\") " pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.906813 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.907112 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpr9\" (UniqueName: \"kubernetes.io/projected/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-kube-api-access-vhpr9\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.907398 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prtqv\" (UniqueName: \"kubernetes.io/projected/e3162cb2-3846-48b8-af83-19fc19296b81-kube-api-access-prtqv\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.907428 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.933233 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(5c5b20c467414abbbacb830e609f4a48949a1aa6eac6f02f186b8f76bf8c7a58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.933318 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(5c5b20c467414abbbacb830e609f4a48949a1aa6eac6f02f186b8f76bf8c7a58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.933341 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(5c5b20c467414abbbacb830e609f4a48949a1aa6eac6f02f186b8f76bf8c7a58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.933402 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators(b56a5ce7-761a-410a-84e8-41e01ad2b55e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators(b56a5ce7-761a-410a-84e8-41e01ad2b55e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(5c5b20c467414abbbacb830e609f4a48949a1aa6eac6f02f186b8f76bf8c7a58): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" podUID="b56a5ce7-761a-410a-84e8-41e01ad2b55e" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.956023 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.960046 4809 generic.go:334] "Generic (PLEG): container finished" podID="e3162cb2-3846-48b8-af83-19fc19296b81" containerID="f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e" exitCode=0 Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.960135 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerDied","Data":"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e"} Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.960164 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6bl4" event={"ID":"e3162cb2-3846-48b8-af83-19fc19296b81","Type":"ContainerDied","Data":"d4178c59f2f5487773356bfec3c87d53dc4e1b4683aeeb754ca6b0229cd3514d"} Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.960180 4809 scope.go:117] "RemoveContainer" containerID="f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.960180 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6bl4" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.975076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"0c8b67448771a641019c7fdc502543b9c08f085c4b23a3d20676bafaecbe2954"} Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.975118 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"c689489cd905750b5a55d30c51d157e2397b929c9d8b87489588c99149770e3a"} Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.975128 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"e95f0538c4abffdc8d0219b49e45ae025ccac2d14af6b61096625abe60db4d38"} Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.975482 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(6e29134d81710151691b11773acd3c93ec668be81b1ed2ed383ffe6309e589f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.975553 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(6e29134d81710151691b11773acd3c93ec668be81b1ed2ed383ffe6309e589f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.975576 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(6e29134d81710151691b11773acd3c93ec668be81b1ed2ed383ffe6309e589f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:22 crc kubenswrapper[4809]: E0226 14:28:22.975625 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators(906a26fc-9fb3-4964-8c39-ef42e4915be5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators(906a26fc-9fb3-4964-8c39-ef42e4915be5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(6e29134d81710151691b11773acd3c93ec668be81b1ed2ed383ffe6309e589f3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" podUID="906a26fc-9fb3-4964-8c39-ef42e4915be5" Feb 26 14:28:22 crc kubenswrapper[4809]: I0226 14:28:22.994317 4809 scope.go:117] "RemoveContainer" containerID="62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e" Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.008813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.008919 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhpr9\" (UniqueName: \"kubernetes.io/projected/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-kube-api-access-vhpr9\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.016807 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.021472 4809 scope.go:117] "RemoveContainer" containerID="05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e" Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.027856 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(bd208faec7931de784bc55621f443e1db126ac76c865a2d37035a0870247cceb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.027923 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(bd208faec7931de784bc55621f443e1db126ac76c865a2d37035a0870247cceb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.027952 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(bd208faec7931de784bc55621f443e1db126ac76c865a2d37035a0870247cceb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.028001 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-qq6nr_openshift-operators(cc062236-67aa-4219-8e13-45ff2cf44f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-qq6nr_openshift-operators(cc062236-67aa-4219-8e13-45ff2cf44f8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(bd208faec7931de784bc55621f443e1db126ac76c865a2d37035a0870247cceb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.032985 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhpr9\" (UniqueName: \"kubernetes.io/projected/bb918b49-7bc0-40e4-b7a7-a4ab671e7911-kube-api-access-vhpr9\") pod \"perses-operator-5bf474d74f-9tgqx\" (UID: \"bb918b49-7bc0-40e4-b7a7-a4ab671e7911\") " pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.036953 4809 scope.go:117] "RemoveContainer" containerID="f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.037318 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e\": container with ID starting with f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e not found: ID does not exist" containerID="f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.037351 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e"} err="failed to get container status \"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e\": rpc error: code = NotFound desc = could not find container \"f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e\": container with ID starting with f7cedda780f73bb97f3dd9d44c42317b98168eb7e902edf778c33db5f096753e not found: ID does not exist"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.037373 4809 scope.go:117] "RemoveContainer" containerID="62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.037536 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e\": container with ID starting with 62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e not found: ID does not exist" containerID="62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.037553 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e"} err="failed to get container status \"62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e\": rpc error: code = NotFound desc = could not find container \"62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e\": container with ID starting with 62e6b1d7e0637c455391be91a19a622fa7ca189a3f9e5cdb56f90714eeb71b7e not found: ID does not exist"
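[Editor's annotation, not part of the journal] The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above (and one more just below for container 05ea8fefe...) look alarming but are harmless: the kubelet is cleaning up containers of the just-deleted redhat-operators-t6bl4 pod, and a NotFound answer from the runtime only means the container record is already gone. A sketch of that idempotent-deletion pattern, with a hypothetical map-backed runtime standing in for CRI-O:

    // removegone.go, a sketch of idempotent container removal: NotFound from
    // the runtime is treated as "already removed". The map-backed runtime is
    // hypothetical, not the kubelet's real CRI client.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("NotFound: ID does not exist")

    type fakeRuntime struct{ containers map[string]bool }

    func (r *fakeRuntime) remove(id string) error {
        if !r.containers[id] {
            return errNotFound // record already gone
        }
        delete(r.containers, id)
        return nil
    }

    // removeContainer logs a NotFound but does not fail on it, so retrying
    // a cleanup that already happened is safe.
    func removeContainer(r *fakeRuntime, id string) error {
        if err := r.remove(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("DeleteContainer returned error containerID=%q err=%v (ignored)\n", id, err)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        r := &fakeRuntime{containers: map[string]bool{"f7cedda7": true}}
        fmt.Println(removeContainer(r, "f7cedda7")) // removed: <nil>
        fmt.Println(removeContainer(r, "f7cedda7")) // already gone, still <nil>
    }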
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.037567 4809 scope.go:117] "RemoveContainer" containerID="05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.037766 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e\": container with ID starting with 05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e not found: ID does not exist" containerID="05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.037784 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e"} err="failed to get container status \"05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e\": rpc error: code = NotFound desc = could not find container \"05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e\": container with ID starting with 05ea8fefe7db2155f8f619f976815aa7d6fb187cbadf6af48935de5e9aa36f4e not found: ID does not exist"
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.043415 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3162cb2-3846-48b8-af83-19fc19296b81" (UID: "e3162cb2-3846-48b8-af83-19fc19296b81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.110625 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3162cb2-3846-48b8-af83-19fc19296b81-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.233647 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.258025 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(a0fc1e8da2b11e7a322b49854c92fad145daeed49e083d8760e7b40a91793aa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.258087 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(a0fc1e8da2b11e7a322b49854c92fad145daeed49e083d8760e7b40a91793aa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.258116 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(a0fc1e8da2b11e7a322b49854c92fad145daeed49e083d8760e7b40a91793aa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:23 crc kubenswrapper[4809]: E0226 14:28:23.258159 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9tgqx_openshift-operators(bb918b49-7bc0-40e4-b7a7-a4ab671e7911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9tgqx_openshift-operators(bb918b49-7bc0-40e4-b7a7-a4ab671e7911)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(a0fc1e8da2b11e7a322b49854c92fad145daeed49e083d8760e7b40a91793aa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podUID="bb918b49-7bc0-40e4-b7a7-a4ab671e7911" Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.296300 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"] Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.304723 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6bl4"] Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.984947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"41d6d10aef501f4b0de30f196be3a49d6bdc862c4f00c3f257195d7c00a05318"} Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.986194 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"d7e87888e6cd942ecbe9e97b8733f9978236d11f1b9a664049a1142d40636c07"} Feb 26 14:28:23 crc kubenswrapper[4809]: I0226 14:28:23.986241 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"87836ab0bd17f42f4da2e1ecd2986003d9763aa277414883eb3e8d1ef92cae8d"} Feb 26 14:28:24 crc kubenswrapper[4809]: I0226 14:28:24.264565 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3162cb2-3846-48b8-af83-19fc19296b81" path="/var/lib/kubelet/pods/e3162cb2-3846-48b8-af83-19fc19296b81/volumes" Feb 26 14:28:26 crc kubenswrapper[4809]: I0226 14:28:26.004297 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"3044bbaa065052b0e1679c117113e1127c8516c72fe2f8cf152115e5081b5d25"} Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.026750 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" event={"ID":"9f0de330-1376-402e-910d-0029d3ff5534","Type":"ContainerStarted","Data":"1e248c1b52ef993471049df25a672a2daad606ded8b5b49c2a80ce346b96b5cc"} Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.027140 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.027162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.027695 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.055822 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.059769 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" podStartSLOduration=9.059749536 podStartE2EDuration="9.059749536s" podCreationTimestamp="2026-02-26 14:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:28:29.05715056 +0000 UTC m=+887.530471103" watchObservedRunningTime="2026-02-26 14:28:29.059749536 +0000 UTC m=+887.533070059" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.077046 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.405106 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw"] Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.405531 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.406067 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.407975 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9tgqx"] Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.408120 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.408666 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.425206 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4"] Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.425370 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.425872 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.451157 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw"] Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.451275 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.451681 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.458401 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(006386e55b90c95ba7183fb999c42dbcfdd7e771bdef7340099b4f565bd28264): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.458455 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(006386e55b90c95ba7183fb999c42dbcfdd7e771bdef7340099b4f565bd28264): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.458478 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(006386e55b90c95ba7183fb999c42dbcfdd7e771bdef7340099b4f565bd28264): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.458526 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators(b56a5ce7-761a-410a-84e8-41e01ad2b55e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators(b56a5ce7-761a-410a-84e8-41e01ad2b55e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_openshift-operators_b56a5ce7-761a-410a-84e8-41e01ad2b55e_0(006386e55b90c95ba7183fb999c42dbcfdd7e771bdef7340099b4f565bd28264): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" podUID="b56a5ce7-761a-410a-84e8-41e01ad2b55e" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.478274 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qq6nr"] Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.478388 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:29 crc kubenswrapper[4809]: I0226 14:28:29.478885 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.597250 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(0203cd86e8a7b7224f611817cb0816a8d3400867d6ad7bcfc3295605a4ba35dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.597304 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(0203cd86e8a7b7224f611817cb0816a8d3400867d6ad7bcfc3295605a4ba35dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.597328 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(0203cd86e8a7b7224f611817cb0816a8d3400867d6ad7bcfc3295605a4ba35dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.597393 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators(87348b90-199e-442d-a9ec-263588a8cc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators(87348b90-199e-442d-a9ec-263588a8cc54)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h5gk4_openshift-operators_87348b90-199e-442d-a9ec-263588a8cc54_0(0203cd86e8a7b7224f611817cb0816a8d3400867d6ad7bcfc3295605a4ba35dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" podUID="87348b90-199e-442d-a9ec-263588a8cc54" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.605430 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(e4e7ff29db3f66f6d66e0d978199fa62f816f327bf39eb9b05152e62ba406ce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.605479 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(e4e7ff29db3f66f6d66e0d978199fa62f816f327bf39eb9b05152e62ba406ce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.605514 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(e4e7ff29db3f66f6d66e0d978199fa62f816f327bf39eb9b05152e62ba406ce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.605549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-qq6nr_openshift-operators(cc062236-67aa-4219-8e13-45ff2cf44f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-qq6nr_openshift-operators(cc062236-67aa-4219-8e13-45ff2cf44f8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-qq6nr_openshift-operators_cc062236-67aa-4219-8e13-45ff2cf44f8e_0(e4e7ff29db3f66f6d66e0d978199fa62f816f327bf39eb9b05152e62ba406ce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.613779 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(713210becc570170115462e6e256bda3066769e120412f0742ba5db2be7a48bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.613844 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(713210becc570170115462e6e256bda3066769e120412f0742ba5db2be7a48bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.613869 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(713210becc570170115462e6e256bda3066769e120412f0742ba5db2be7a48bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.613926 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9tgqx_openshift-operators(bb918b49-7bc0-40e4-b7a7-a4ab671e7911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9tgqx_openshift-operators(bb918b49-7bc0-40e4-b7a7-a4ab671e7911)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9tgqx_openshift-operators_bb918b49-7bc0-40e4-b7a7-a4ab671e7911_0(713210becc570170115462e6e256bda3066769e120412f0742ba5db2be7a48bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podUID="bb918b49-7bc0-40e4-b7a7-a4ab671e7911" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.627135 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(c67bd5fcc02e55c1d0ead4be36163a841a26fccdd8dee170737def2506ea2227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.627192 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(c67bd5fcc02e55c1d0ead4be36163a841a26fccdd8dee170737def2506ea2227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.627209 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(c67bd5fcc02e55c1d0ead4be36163a841a26fccdd8dee170737def2506ea2227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" Feb 26 14:28:29 crc kubenswrapper[4809]: E0226 14:28:29.627247 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators(906a26fc-9fb3-4964-8c39-ef42e4915be5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators(906a26fc-9fb3-4964-8c39-ef42e4915be5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_openshift-operators_906a26fc-9fb3-4964-8c39-ef42e4915be5_0(c67bd5fcc02e55c1d0ead4be36163a841a26fccdd8dee170737def2506ea2227): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" podUID="906a26fc-9fb3-4964-8c39-ef42e4915be5"
Feb 26 14:28:40 crc kubenswrapper[4809]: I0226 14:28:40.256599 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4"
Feb 26 14:28:40 crc kubenswrapper[4809]: I0226 14:28:40.257488 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4"
Feb 26 14:28:40 crc kubenswrapper[4809]: I0226 14:28:40.451013 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4"]
Feb 26 14:28:40 crc kubenswrapper[4809]: W0226 14:28:40.457486 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87348b90_199e_442d_a9ec_263588a8cc54.slice/crio-a9c4eadec8dd9bb25071c05d11ca8e06469a81d39213234e2c3f95a98e04e352 WatchSource:0}: Error finding container a9c4eadec8dd9bb25071c05d11ca8e06469a81d39213234e2c3f95a98e04e352: Status 404 returned error can't find the container with id a9c4eadec8dd9bb25071c05d11ca8e06469a81d39213234e2c3f95a98e04e352
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.114057 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" event={"ID":"87348b90-199e-442d-a9ec-263588a8cc54","Type":"ContainerStarted","Data":"a9c4eadec8dd9bb25071c05d11ca8e06469a81d39213234e2c3f95a98e04e352"}
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.255703 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw"
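[Editor's annotation, not part of the journal] This is the turning point: at 14:28:40 the CNI configuration is finally in place and the first stuck sandbox (a9c4eade... for obo-prometheus-operator-68bc856cb9-h5gk4) is created. The accompanying W0226 "Failed to process watch event ... Status 404" warnings appear to be cAdvisor racing the brand-new cgroup during container creation; the ContainerStarted events that follow show they resolve on their own. To tally how many sandbox attempts each pod burned through before this point, a small filter over this journal (assumes the journal text is piped in on stdin):

    // retries.go counts the "RunPodSandbox from runtime service failed"
    // entries per pod in a journal read from stdin. Illustrates the retry
    // cadence visible here: failures at 14:28:22, 14:28:23, 14:28:29, then
    // success once the CNI config exists.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`RunPodSandbox from runtime service failed.*k8s_([^_]+)_`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++ // m[1] is the pod name from the sandbox ID
            }
        }
        for pod, n := range counts {
            fmt.Printf("%-65s %d failed attempts\n", pod, n)
        }
    }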
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.256347 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw"
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.558564 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw"]
Feb 26 14:28:41 crc kubenswrapper[4809]: W0226 14:28:41.566303 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906a26fc_9fb3_4964_8c39_ef42e4915be5.slice/crio-5f9954477b70d35a7cb4f897f17a855559050bf8e963c835d8a14934f5f6308f WatchSource:0}: Error finding container 5f9954477b70d35a7cb4f897f17a855559050bf8e963c835d8a14934f5f6308f: Status 404 returned error can't find the container with id 5f9954477b70d35a7cb4f897f17a855559050bf8e963c835d8a14934f5f6308f
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.793650 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:28:41 crc kubenswrapper[4809]: I0226 14:28:41.793751 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:28:42 crc kubenswrapper[4809]: I0226 14:28:42.123365 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" event={"ID":"906a26fc-9fb3-4964-8c39-ef42e4915be5","Type":"ContainerStarted","Data":"5f9954477b70d35a7cb4f897f17a855559050bf8e963c835d8a14934f5f6308f"}
Feb 26 14:28:42 crc kubenswrapper[4809]: I0226 14:28:42.255686 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:42 crc kubenswrapper[4809]: I0226 14:28:42.261898 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:43 crc kubenswrapper[4809]: I0226 14:28:43.256429 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr"
Feb 26 14:28:43 crc kubenswrapper[4809]: I0226 14:28:43.257072 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr"
Feb 26 14:28:44 crc kubenswrapper[4809]: I0226 14:28:44.256386 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw"
Feb 26 14:28:44 crc kubenswrapper[4809]: I0226 14:28:44.257317 4809 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" Feb 26 14:28:44 crc kubenswrapper[4809]: I0226 14:28:44.284343 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qq6nr"] Feb 26 14:28:44 crc kubenswrapper[4809]: W0226 14:28:44.291113 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc062236_67aa_4219_8e13_45ff2cf44f8e.slice/crio-c824fd3baac7a2dae127497eba9a942a455ce8b4bf2f3e8d7cdb1babadbf9e23 WatchSource:0}: Error finding container c824fd3baac7a2dae127497eba9a942a455ce8b4bf2f3e8d7cdb1babadbf9e23: Status 404 returned error can't find the container with id c824fd3baac7a2dae127497eba9a942a455ce8b4bf2f3e8d7cdb1babadbf9e23 Feb 26 14:28:44 crc kubenswrapper[4809]: I0226 14:28:44.353899 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9tgqx"] Feb 26 14:28:44 crc kubenswrapper[4809]: I0226 14:28:44.659731 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw"] Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.146511 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" event={"ID":"b56a5ce7-761a-410a-84e8-41e01ad2b55e","Type":"ContainerStarted","Data":"2717d1e718dad996f0c4ba46466578341a28f8b29e371ba6d1127892d98dd5ed"} Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.149949 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" event={"ID":"cc062236-67aa-4219-8e13-45ff2cf44f8e","Type":"ContainerStarted","Data":"c824fd3baac7a2dae127497eba9a942a455ce8b4bf2f3e8d7cdb1babadbf9e23"} Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.154466 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" event={"ID":"906a26fc-9fb3-4964-8c39-ef42e4915be5","Type":"ContainerStarted","Data":"6ffe2df578201420851c8d31eb4aff8bbdfbd1e3eab1a7f7a19caeadad11275e"} Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.162710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" event={"ID":"87348b90-199e-442d-a9ec-263588a8cc54","Type":"ContainerStarted","Data":"3c927807a7c6af70262e3d927c360f8209af89a2f9fcbf7b8eb0361da611bb6d"} Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.169398 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" event={"ID":"bb918b49-7bc0-40e4-b7a7-a4ab671e7911","Type":"ContainerStarted","Data":"7bb1034d881dd7d1587af61aee86ab86e52b16131159dfa6bc2f9982c0ef3242"} Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.196583 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw" podStartSLOduration=19.848659839 podStartE2EDuration="23.196553105s" podCreationTimestamp="2026-02-26 14:28:22 +0000 UTC" firstStartedPulling="2026-02-26 14:28:41.569269495 +0000 UTC m=+900.042590018" lastFinishedPulling="2026-02-26 14:28:44.917162761 +0000 UTC m=+903.390483284" observedRunningTime="2026-02-26 14:28:45.177931112 +0000 UTC m=+903.651251695" watchObservedRunningTime="2026-02-26 
14:28:45.196553105 +0000 UTC m=+903.669873648"
Feb 26 14:28:45 crc kubenswrapper[4809]: I0226 14:28:45.215906 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h5gk4" podStartSLOduration=19.770057887 podStartE2EDuration="23.215881348s" podCreationTimestamp="2026-02-26 14:28:22 +0000 UTC" firstStartedPulling="2026-02-26 14:28:40.460660453 +0000 UTC m=+898.933980976" lastFinishedPulling="2026-02-26 14:28:43.906483914 +0000 UTC m=+902.379804437" observedRunningTime="2026-02-26 14:28:45.212250352 +0000 UTC m=+903.685570895" watchObservedRunningTime="2026-02-26 14:28:45.215881348 +0000 UTC m=+903.689201881"
Feb 26 14:28:46 crc kubenswrapper[4809]: I0226 14:28:46.177990 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" event={"ID":"b56a5ce7-761a-410a-84e8-41e01ad2b55e","Type":"ContainerStarted","Data":"32c69554a2e9986384153dc5f3b34c015e084f2b327a36b85454843808bfbc55"}
Feb 26 14:28:46 crc kubenswrapper[4809]: I0226 14:28:46.199705 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw" podStartSLOduration=23.464664889 podStartE2EDuration="24.199691171s" podCreationTimestamp="2026-02-26 14:28:22 +0000 UTC" firstStartedPulling="2026-02-26 14:28:44.801233933 +0000 UTC m=+903.274554456" lastFinishedPulling="2026-02-26 14:28:45.536260215 +0000 UTC m=+904.009580738" observedRunningTime="2026-02-26 14:28:46.197755465 +0000 UTC m=+904.671075988" watchObservedRunningTime="2026-02-26 14:28:46.199691171 +0000 UTC m=+904.673011684"
Feb 26 14:28:47 crc kubenswrapper[4809]: I0226 14:28:47.185442 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" event={"ID":"bb918b49-7bc0-40e4-b7a7-a4ab671e7911","Type":"ContainerStarted","Data":"aa6c70dc412648e6707e7dfc0b667891932836581911d1af38e3606c079cb357"}
Feb 26 14:28:47 crc kubenswrapper[4809]: I0226 14:28:47.203360 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podStartSLOduration=23.25160127 podStartE2EDuration="25.203341844s" podCreationTimestamp="2026-02-26 14:28:22 +0000 UTC" firstStartedPulling="2026-02-26 14:28:44.803563781 +0000 UTC m=+903.276884304" lastFinishedPulling="2026-02-26 14:28:46.755304355 +0000 UTC m=+905.228624878" observedRunningTime="2026-02-26 14:28:47.200220823 +0000 UTC m=+905.673541346" watchObservedRunningTime="2026-02-26 14:28:47.203341844 +0000 UTC m=+905.676662367"
Feb 26 14:28:47 crc kubenswrapper[4809]: I0226 14:28:47.387154 4809 scope.go:117] "RemoveContainer" containerID="5081a02acf3ea1d3ab730be6cfc1fadb0ad4c0ef5533076b3d2393d26d096929"
Feb 26 14:28:48 crc kubenswrapper[4809]: I0226 14:28:48.191978 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx"
Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.027640 4809 scope.go:117] "RemoveContainer" containerID="0d9611ff1762d9022ebc8d0f14f24ae69376cb3effb89661db7ce452c648fb4a"
Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.051551 4809 scope.go:117] "RemoveContainer" containerID="2fce947f869686ab49f28f0a3dc0bd0b8aafd44ce70aacfc9b09d0b11b008888"
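[Editor's annotation, not part of the journal] The "Observed pod startup duration" entries above carry two figures. The log does not spell out the relationship, but the numbers are internally consistent with podStartE2EDuration = watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration = that same span minus image-pull time (lastFinishedPulling minus firstStartedPulling). A quick re-derivation from the obo-prometheus-operator-68bc856cb9-h5gk4 entry:

    // slocheck.go re-derives the startup figures for
    // obo-prometheus-operator-68bc856cb9-h5gk4 from the timestamps logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-26 14:28:22 +0000 UTC")
        firstPull := mustParse("2026-02-26 14:28:40.460660453 +0000 UTC")
        lastPull := mustParse("2026-02-26 14:28:43.906483914 +0000 UTC")
        running := mustParse("2026-02-26 14:28:45.215881348 +0000 UTC")

        e2e := running.Sub(created)          // 23.215881348s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 19.770057887s = podStartSLOduration
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo)
    }

In other words, of the roughly 23.2 s these operator pods took to start, about 3.4 s was image pulling; most of the rest was the CNI wait documented earlier in this journal.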
containerID="38b28e6726535b95ba4355952d95a4d0613dde91b88d3d63b72c0b9edbeaabfa" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.085132 4809 scope.go:117] "RemoveContainer" containerID="587df762653423efcc2bdf6af91f043e774421cc6bf8d69a9c754eb467608b6f" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.128602 4809 scope.go:117] "RemoveContainer" containerID="4136f2637e699c68ac367d76cfbcc0365cba0606b4c0dd697df232fe0e5c0b77" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.199228 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-acl-logging/0.log" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.199780 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qwqmq_4eaaa554-c5bb-455b-ad10-96f71caf4e26/ovn-controller/0.log" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.312669 4809 scope.go:117] "RemoveContainer" containerID="d9e3e1b29a35b93ca22250417f741e21c5073e5202d2fbbca513f6a3bfc46d7f" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.331745 4809 scope.go:117] "RemoveContainer" containerID="a2a368366470286903613fe156324e15f97fc1c9de6105b56a7cd4941d63e8b3" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.354058 4809 scope.go:117] "RemoveContainer" containerID="d5bb739850459ff4116eb2c36743004887ddf66fd7e8500480001c7bbdf23dd9" Feb 26 14:28:49 crc kubenswrapper[4809]: I0226 14:28:49.389693 4809 scope.go:117] "RemoveContainer" containerID="16b8ebde9e61fc7f7db6fd4b4872e88fb510af6fac3a7674ddac30a072098923" Feb 26 14:28:50 crc kubenswrapper[4809]: I0226 14:28:50.206670 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" event={"ID":"cc062236-67aa-4219-8e13-45ff2cf44f8e","Type":"ContainerStarted","Data":"1251c45952722a083fe2a994c8aa6b7dc167515313e6b3ba4b0764af9adf27eb"} Feb 26 14:28:50 crc kubenswrapper[4809]: I0226 14:28:50.207581 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:50 crc kubenswrapper[4809]: I0226 14:28:50.209007 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" Feb 26 14:28:50 crc kubenswrapper[4809]: I0226 14:28:50.234480 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podStartSLOduration=23.196064186 podStartE2EDuration="28.234460886s" podCreationTimestamp="2026-02-26 14:28:22 +0000 UTC" firstStartedPulling="2026-02-26 14:28:44.294479013 +0000 UTC m=+902.767799536" lastFinishedPulling="2026-02-26 14:28:49.332875713 +0000 UTC m=+907.806196236" observedRunningTime="2026-02-26 14:28:50.230535725 +0000 UTC m=+908.703856258" watchObservedRunningTime="2026-02-26 14:28:50.234460886 +0000 UTC m=+908.707781409" Feb 26 14:28:51 crc kubenswrapper[4809]: I0226 14:28:51.397480 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2jlg8" Feb 26 14:28:53 crc kubenswrapper[4809]: I0226 14:28:53.236607 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.363513 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt"] Feb 
26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.364806 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.366659 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.366854 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tl6pt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.367360 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.376877 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-w4nlj"] Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.378163 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-w4nlj" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.382728 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-59r7f" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.388929 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt"] Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.394572 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-w4nlj"] Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.414945 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bk5r6"] Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.416058 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.419542 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gfsvk" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.434576 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bk5r6"] Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.470067 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpn7p\" (UniqueName: \"kubernetes.io/projected/bd634336-09f5-4412-a619-3c59838d89c6-kube-api-access-dpn7p\") pod \"cert-manager-webhook-687f57d79b-bk5r6\" (UID: \"bd634336-09f5-4412-a619-3c59838d89c6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.470128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7nq\" (UniqueName: \"kubernetes.io/projected/379471df-cfa1-4a81-893b-f00d1ef56738-kube-api-access-jj7nq\") pod \"cert-manager-858654f9db-w4nlj\" (UID: \"379471df-cfa1-4a81-893b-f00d1ef56738\") " pod="cert-manager/cert-manager-858654f9db-w4nlj" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.470188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvn5\" (UniqueName: \"kubernetes.io/projected/8cdb2a93-aaed-4598-b78d-c8ba2a452c77-kube-api-access-xlvn5\") pod \"cert-manager-cainjector-cf98fcc89-lhqrt\" (UID: \"8cdb2a93-aaed-4598-b78d-c8ba2a452c77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.572001 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvn5\" (UniqueName: \"kubernetes.io/projected/8cdb2a93-aaed-4598-b78d-c8ba2a452c77-kube-api-access-xlvn5\") pod \"cert-manager-cainjector-cf98fcc89-lhqrt\" (UID: \"8cdb2a93-aaed-4598-b78d-c8ba2a452c77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.572136 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpn7p\" (UniqueName: \"kubernetes.io/projected/bd634336-09f5-4412-a619-3c59838d89c6-kube-api-access-dpn7p\") pod \"cert-manager-webhook-687f57d79b-bk5r6\" (UID: \"bd634336-09f5-4412-a619-3c59838d89c6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.572187 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7nq\" (UniqueName: \"kubernetes.io/projected/379471df-cfa1-4a81-893b-f00d1ef56738-kube-api-access-jj7nq\") pod \"cert-manager-858654f9db-w4nlj\" (UID: \"379471df-cfa1-4a81-893b-f00d1ef56738\") " pod="cert-manager/cert-manager-858654f9db-w4nlj" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.590820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7nq\" (UniqueName: \"kubernetes.io/projected/379471df-cfa1-4a81-893b-f00d1ef56738-kube-api-access-jj7nq\") pod \"cert-manager-858654f9db-w4nlj\" (UID: \"379471df-cfa1-4a81-893b-f00d1ef56738\") " pod="cert-manager/cert-manager-858654f9db-w4nlj" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.591947 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xlvn5\" (UniqueName: \"kubernetes.io/projected/8cdb2a93-aaed-4598-b78d-c8ba2a452c77-kube-api-access-xlvn5\") pod \"cert-manager-cainjector-cf98fcc89-lhqrt\" (UID: \"8cdb2a93-aaed-4598-b78d-c8ba2a452c77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.594312 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpn7p\" (UniqueName: \"kubernetes.io/projected/bd634336-09f5-4412-a619-3c59838d89c6-kube-api-access-dpn7p\") pod \"cert-manager-webhook-687f57d79b-bk5r6\" (UID: \"bd634336-09f5-4412-a619-3c59838d89c6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.683859 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.699913 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-w4nlj" Feb 26 14:28:59 crc kubenswrapper[4809]: I0226 14:28:59.736588 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.111527 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt"] Feb 26 14:29:00 crc kubenswrapper[4809]: W0226 14:29:00.113955 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cdb2a93_aaed_4598_b78d_c8ba2a452c77.slice/crio-8b083bbbbb144c02ad8589cf7bb019bbd4255a2631f9ebf279d4d3efd75642a0 WatchSource:0}: Error finding container 8b083bbbbb144c02ad8589cf7bb019bbd4255a2631f9ebf279d4d3efd75642a0: Status 404 returned error can't find the container with id 8b083bbbbb144c02ad8589cf7bb019bbd4255a2631f9ebf279d4d3efd75642a0 Feb 26 14:29:00 crc kubenswrapper[4809]: W0226 14:29:00.167321 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379471df_cfa1_4a81_893b_f00d1ef56738.slice/crio-d3a9f3a054454294f598db6afd9c3a1536fece5be40b201daf847046981a8b87 WatchSource:0}: Error finding container d3a9f3a054454294f598db6afd9c3a1536fece5be40b201daf847046981a8b87: Status 404 returned error can't find the container with id d3a9f3a054454294f598db6afd9c3a1536fece5be40b201daf847046981a8b87 Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.168003 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-w4nlj"] Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.224987 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bk5r6"] Feb 26 14:29:00 crc kubenswrapper[4809]: W0226 14:29:00.229510 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd634336_09f5_4412_a619_3c59838d89c6.slice/crio-46962394d88d63e4e753948621468ffa67b23fb5270a419929a17707a679a111 WatchSource:0}: Error finding container 46962394d88d63e4e753948621468ffa67b23fb5270a419929a17707a679a111: Status 404 returned error can't find the container with id 46962394d88d63e4e753948621468ffa67b23fb5270a419929a17707a679a111 Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.290822 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" event={"ID":"8cdb2a93-aaed-4598-b78d-c8ba2a452c77","Type":"ContainerStarted","Data":"8b083bbbbb144c02ad8589cf7bb019bbd4255a2631f9ebf279d4d3efd75642a0"} Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.292231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" event={"ID":"bd634336-09f5-4412-a619-3c59838d89c6","Type":"ContainerStarted","Data":"46962394d88d63e4e753948621468ffa67b23fb5270a419929a17707a679a111"} Feb 26 14:29:00 crc kubenswrapper[4809]: I0226 14:29:00.293271 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-w4nlj" event={"ID":"379471df-cfa1-4a81-893b-f00d1ef56738","Type":"ContainerStarted","Data":"d3a9f3a054454294f598db6afd9c3a1536fece5be40b201daf847046981a8b87"} Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.358134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-w4nlj" event={"ID":"379471df-cfa1-4a81-893b-f00d1ef56738","Type":"ContainerStarted","Data":"0a2736495e7a2d48d08605ad3c79ca11836f6ad0960b8ee5d3a3a4a59a24eb87"} Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.363581 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" event={"ID":"8cdb2a93-aaed-4598-b78d-c8ba2a452c77","Type":"ContainerStarted","Data":"4ebf955a3d1d0bba8d3d1490f9ec029bb43739959f00e6b02dbecb5ea122f91e"} Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.365106 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" event={"ID":"bd634336-09f5-4412-a619-3c59838d89c6","Type":"ContainerStarted","Data":"154bf662aab4dedd567885d3d2f9da92b5fa8a4e62fb9a87a27763c872b8fcaf"} Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.365546 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.377952 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-w4nlj" podStartSLOduration=1.875672266 podStartE2EDuration="10.377937583s" podCreationTimestamp="2026-02-26 14:28:59 +0000 UTC" firstStartedPulling="2026-02-26 14:29:00.169467457 +0000 UTC m=+918.642787990" lastFinishedPulling="2026-02-26 14:29:08.671732764 +0000 UTC m=+927.145053307" observedRunningTime="2026-02-26 14:29:09.376585165 +0000 UTC m=+927.849905688" watchObservedRunningTime="2026-02-26 14:29:09.377937583 +0000 UTC m=+927.851258106" Feb 26 14:29:09 crc kubenswrapper[4809]: I0226 14:29:09.396908 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lhqrt" podStartSLOduration=1.8388961560000001 podStartE2EDuration="10.396884519s" podCreationTimestamp="2026-02-26 14:28:59 +0000 UTC" firstStartedPulling="2026-02-26 14:29:00.115689386 +0000 UTC m=+918.589009909" lastFinishedPulling="2026-02-26 14:29:08.673677739 +0000 UTC m=+927.146998272" observedRunningTime="2026-02-26 14:29:09.39124965 +0000 UTC m=+927.864570173" watchObservedRunningTime="2026-02-26 14:29:09.396884519 +0000 UTC m=+927.870205042" Feb 26 14:29:11 crc kubenswrapper[4809]: I0226 14:29:11.793868 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:29:11 crc kubenswrapper[4809]: I0226 14:29:11.794177 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:29:11 crc kubenswrapper[4809]: I0226 14:29:11.794224 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:29:11 crc kubenswrapper[4809]: I0226 14:29:11.794880 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:29:11 crc kubenswrapper[4809]: I0226 14:29:11.794934 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280" gracePeriod=600 Feb 26 14:29:12 crc kubenswrapper[4809]: I0226 14:29:12.385288 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280" exitCode=0 Feb 26 14:29:12 crc kubenswrapper[4809]: I0226 14:29:12.385354 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280"} Feb 26 14:29:12 crc kubenswrapper[4809]: I0226 14:29:12.385549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9"} Feb 26 14:29:12 crc kubenswrapper[4809]: I0226 14:29:12.385571 4809 scope.go:117] "RemoveContainer" containerID="8d5d0c8b3d1f1dd1946501c2b01c21b8b723898c3ad72e881da694f2b8dbba00" Feb 26 14:29:12 crc kubenswrapper[4809]: I0226 14:29:12.402326 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" podStartSLOduration=4.896708951 podStartE2EDuration="13.402308892s" podCreationTimestamp="2026-02-26 14:28:59 +0000 UTC" firstStartedPulling="2026-02-26 14:29:00.231674156 +0000 UTC m=+918.704994679" lastFinishedPulling="2026-02-26 14:29:08.737274097 +0000 UTC m=+927.210594620" observedRunningTime="2026-02-26 14:29:09.414142557 +0000 UTC m=+927.887463110" watchObservedRunningTime="2026-02-26 14:29:12.402308892 +0000 UTC m=+930.875629415" Feb 26 14:29:14 crc kubenswrapper[4809]: I0226 14:29:14.742465 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.440686 4809 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv"] Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.442363 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.451677 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv"] Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.457789 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.522310 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.522390 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkk5\" (UniqueName: \"kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.522597 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.623590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.623672 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlkk5\" (UniqueName: \"kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.623727 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.624211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.624242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.678118 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlkk5\" (UniqueName: \"kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.718190 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c"] Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.719821 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.742809 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c"] Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.764963 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.829603 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.829686 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.829714 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5vl\" (UniqueName: \"kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.930734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.931169 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5vl\" (UniqueName: \"kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.931262 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.931741 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.931757 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:42 crc kubenswrapper[4809]: I0226 14:29:42.972947 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5vl\" (UniqueName: \"kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.037932 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv"] Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.065450 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.267757 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c"] Feb 26 14:29:43 crc kubenswrapper[4809]: W0226 14:29:43.268691 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60de68b5_ae89_4301_a77c_9d52379551e1.slice/crio-2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b WatchSource:0}: Error finding container 2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b: Status 404 returned error can't find the container with id 2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.591882 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerStarted","Data":"6beac7f95007c8d6a3fddfc9516fff71da7572d71769f2073e7ea92af50dfd10"} Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.592209 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerStarted","Data":"fe27333556426ba56845839ee6f85fbd030299ae3e7284d5a52b88fd80e0a478"} Feb 26 14:29:43 crc kubenswrapper[4809]: I0226 14:29:43.593368 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" event={"ID":"60de68b5-ae89-4301-a77c-9d52379551e1","Type":"ContainerStarted","Data":"2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b"} Feb 26 14:29:44 crc kubenswrapper[4809]: I0226 14:29:44.600628 4809 generic.go:334] "Generic (PLEG): container finished" podID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerID="6beac7f95007c8d6a3fddfc9516fff71da7572d71769f2073e7ea92af50dfd10" exitCode=0 Feb 26 14:29:44 crc kubenswrapper[4809]: I0226 14:29:44.600710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" 
event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerDied","Data":"6beac7f95007c8d6a3fddfc9516fff71da7572d71769f2073e7ea92af50dfd10"} Feb 26 14:29:44 crc kubenswrapper[4809]: I0226 14:29:44.603198 4809 generic.go:334] "Generic (PLEG): container finished" podID="60de68b5-ae89-4301-a77c-9d52379551e1" containerID="22858c8781e2d3410a2b2451dad0e575a59f7e8b61c75a620207e0af68074a4a" exitCode=0 Feb 26 14:29:44 crc kubenswrapper[4809]: I0226 14:29:44.603229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" event={"ID":"60de68b5-ae89-4301-a77c-9d52379551e1","Type":"ContainerDied","Data":"22858c8781e2d3410a2b2451dad0e575a59f7e8b61c75a620207e0af68074a4a"} Feb 26 14:29:46 crc kubenswrapper[4809]: I0226 14:29:46.626455 4809 generic.go:334] "Generic (PLEG): container finished" podID="60de68b5-ae89-4301-a77c-9d52379551e1" containerID="b587204cc32a5765769a7616ba6b2e5845e985eb7aa7fb0cbbc2172933349da4" exitCode=0 Feb 26 14:29:46 crc kubenswrapper[4809]: I0226 14:29:46.626763 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" event={"ID":"60de68b5-ae89-4301-a77c-9d52379551e1","Type":"ContainerDied","Data":"b587204cc32a5765769a7616ba6b2e5845e985eb7aa7fb0cbbc2172933349da4"} Feb 26 14:29:46 crc kubenswrapper[4809]: I0226 14:29:46.630319 4809 generic.go:334] "Generic (PLEG): container finished" podID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerID="0aa40ca058f4c83221aa156ff9ff4b745808636eb6768cfccf2423ed081ab132" exitCode=0 Feb 26 14:29:46 crc kubenswrapper[4809]: I0226 14:29:46.630367 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerDied","Data":"0aa40ca058f4c83221aa156ff9ff4b745808636eb6768cfccf2423ed081ab132"} Feb 26 14:29:47 crc kubenswrapper[4809]: I0226 14:29:47.638452 4809 generic.go:334] "Generic (PLEG): container finished" podID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerID="a7c733319c34e8b83cb20e3296843030c870dd9359dca556beecde0ad39f75c1" exitCode=0 Feb 26 14:29:47 crc kubenswrapper[4809]: I0226 14:29:47.638527 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerDied","Data":"a7c733319c34e8b83cb20e3296843030c870dd9359dca556beecde0ad39f75c1"} Feb 26 14:29:47 crc kubenswrapper[4809]: I0226 14:29:47.641703 4809 generic.go:334] "Generic (PLEG): container finished" podID="60de68b5-ae89-4301-a77c-9d52379551e1" containerID="7ef39c69f449c9d8517257a6f109e947689ac1de71e899675584171c2442d107" exitCode=0 Feb 26 14:29:47 crc kubenswrapper[4809]: I0226 14:29:47.641742 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" event={"ID":"60de68b5-ae89-4301-a77c-9d52379551e1","Type":"ContainerDied","Data":"7ef39c69f449c9d8517257a6f109e947689ac1de71e899675584171c2442d107"} Feb 26 14:29:48 crc kubenswrapper[4809]: I0226 14:29:48.945073 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:48 crc kubenswrapper[4809]: I0226 14:29:48.948333 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019555 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle\") pod \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019648 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util\") pod \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019723 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlkk5\" (UniqueName: \"kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5\") pod \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\" (UID: \"5ca418d2-a956-4ec3-95a0-9f69dea10a9f\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019748 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle\") pod \"60de68b5-ae89-4301-a77c-9d52379551e1\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019779 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util\") pod \"60de68b5-ae89-4301-a77c-9d52379551e1\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.019810 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5vl\" (UniqueName: \"kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl\") pod \"60de68b5-ae89-4301-a77c-9d52379551e1\" (UID: \"60de68b5-ae89-4301-a77c-9d52379551e1\") " Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.023901 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle" (OuterVolumeSpecName: "bundle") pod "60de68b5-ae89-4301-a77c-9d52379551e1" (UID: "60de68b5-ae89-4301-a77c-9d52379551e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.024729 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle" (OuterVolumeSpecName: "bundle") pod "5ca418d2-a956-4ec3-95a0-9f69dea10a9f" (UID: "5ca418d2-a956-4ec3-95a0-9f69dea10a9f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.027472 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl" (OuterVolumeSpecName: "kube-api-access-fc5vl") pod "60de68b5-ae89-4301-a77c-9d52379551e1" (UID: "60de68b5-ae89-4301-a77c-9d52379551e1"). InnerVolumeSpecName "kube-api-access-fc5vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.027810 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5" (OuterVolumeSpecName: "kube-api-access-mlkk5") pod "5ca418d2-a956-4ec3-95a0-9f69dea10a9f" (UID: "5ca418d2-a956-4ec3-95a0-9f69dea10a9f"). InnerVolumeSpecName "kube-api-access-mlkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.049177 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util" (OuterVolumeSpecName: "util") pod "5ca418d2-a956-4ec3-95a0-9f69dea10a9f" (UID: "5ca418d2-a956-4ec3-95a0-9f69dea10a9f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.121547 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.121582 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.121593 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlkk5\" (UniqueName: \"kubernetes.io/projected/5ca418d2-a956-4ec3-95a0-9f69dea10a9f-kube-api-access-mlkk5\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.121604 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.121613 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5vl\" (UniqueName: \"kubernetes.io/projected/60de68b5-ae89-4301-a77c-9d52379551e1-kube-api-access-fc5vl\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.468200 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util" (OuterVolumeSpecName: "util") pod "60de68b5-ae89-4301-a77c-9d52379551e1" (UID: "60de68b5-ae89-4301-a77c-9d52379551e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.527385 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60de68b5-ae89-4301-a77c-9d52379551e1-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.663111 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" event={"ID":"5ca418d2-a956-4ec3-95a0-9f69dea10a9f","Type":"ContainerDied","Data":"fe27333556426ba56845839ee6f85fbd030299ae3e7284d5a52b88fd80e0a478"} Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.663166 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe27333556426ba56845839ee6f85fbd030299ae3e7284d5a52b88fd80e0a478" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.663270 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.670353 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" event={"ID":"60de68b5-ae89-4301-a77c-9d52379551e1","Type":"ContainerDied","Data":"2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b"} Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.670438 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad073118a9f550e0851801d95156aa5a2bb11bad4d07f2319521f113953cf6b" Feb 26 14:29:49 crc kubenswrapper[4809]: I0226 14:29:49.670637 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.126682 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535270-cjtxt"] Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127579 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="pull" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127594 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="pull" Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127612 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="util" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127620 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="util" Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127632 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="util" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127641 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="util" Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127654 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127661 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127675 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="pull" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127682 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="pull" Feb 26 14:30:00 crc kubenswrapper[4809]: E0226 14:30:00.127704 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127710 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127844 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="60de68b5-ae89-4301-a77c-9d52379551e1" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.127863 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca418d2-a956-4ec3-95a0-9f69dea10a9f" containerName="extract" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.128414 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.130592 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.131205 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.131551 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.144684 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4"] Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.145919 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.147974 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.148764 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.151537 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-cjtxt"] Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.163457 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4"] Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.277061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.277108 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgbj\" (UniqueName: \"kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj\") pod \"auto-csr-approver-29535270-cjtxt\" (UID: \"0a3bae85-a2b7-4dc7-9ef4-5001ea024453\") " pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.277165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.277281 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfx2q\" (UniqueName: \"kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 
26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.378717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.379174 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfx2q\" (UniqueName: \"kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.379301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.379335 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgbj\" (UniqueName: \"kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj\") pod \"auto-csr-approver-29535270-cjtxt\" (UID: \"0a3bae85-a2b7-4dc7-9ef4-5001ea024453\") " pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.380244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.386467 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.405270 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq"] Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.405486 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfx2q\" (UniqueName: \"kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q\") pod \"collect-profiles-29535270-g7xf4\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.406229 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.414337 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgbj\" (UniqueName: \"kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj\") pod \"auto-csr-approver-29535270-cjtxt\" (UID: \"0a3bae85-a2b7-4dc7-9ef4-5001ea024453\") " pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.414890 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.414998 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.415064 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.415007 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.415156 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.415209 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-qg95d" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.450726 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.454575 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq"] Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.463773 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.480291 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/5be7c3b0-feda-4dfd-963c-17813fdc8651-manager-config\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.480345 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-webhook-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.480373 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.480404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnj8j\" (UniqueName: \"kubernetes.io/projected/5be7c3b0-feda-4dfd-963c-17813fdc8651-kube-api-access-mnj8j\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.480444 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-apiservice-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.581614 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-apiservice-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.582302 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/5be7c3b0-feda-4dfd-963c-17813fdc8651-manager-config\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.582341 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-webhook-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.582370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.582402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnj8j\" (UniqueName: \"kubernetes.io/projected/5be7c3b0-feda-4dfd-963c-17813fdc8651-kube-api-access-mnj8j\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.583357 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/5be7c3b0-feda-4dfd-963c-17813fdc8651-manager-config\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.588690 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-apiservice-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.593387 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.596703 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5be7c3b0-feda-4dfd-963c-17813fdc8651-webhook-cert\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.613872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnj8j\" (UniqueName: \"kubernetes.io/projected/5be7c3b0-feda-4dfd-963c-17813fdc8651-kube-api-access-mnj8j\") pod \"loki-operator-controller-manager-57cd74799f-hkpdq\" (UID: \"5be7c3b0-feda-4dfd-963c-17813fdc8651\") " pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:00 crc kubenswrapper[4809]: I0226 14:30:00.761193 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.055890 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq"] Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.124979 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4"] Feb 26 14:30:01 crc kubenswrapper[4809]: W0226 14:30:01.128654 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6de658b_2510_4a3c_a895_39e7b760b5e2.slice/crio-772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693 WatchSource:0}: Error finding container 772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693: Status 404 returned error can't find the container with id 772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693 Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.138821 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-cjtxt"] Feb 26 14:30:01 crc kubenswrapper[4809]: W0226 14:30:01.147769 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a3bae85_a2b7_4dc7_9ef4_5001ea024453.slice/crio-992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df WatchSource:0}: Error finding container 992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df: Status 404 returned error can't find the container with id 992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.710952 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-rb6br"] Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.711968 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.714156 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-2d7lk" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.714520 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.715269 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.723403 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-rb6br"] Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.761143 4809 generic.go:334] "Generic (PLEG): container finished" podID="d6de658b-2510-4a3c-a895-39e7b760b5e2" containerID="39190b5b0e783a5c00db382463c39f70212215dab307e491ab21e664fcd312f6" exitCode=0 Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.761619 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" event={"ID":"d6de658b-2510-4a3c-a895-39e7b760b5e2","Type":"ContainerDied","Data":"39190b5b0e783a5c00db382463c39f70212215dab307e491ab21e664fcd312f6"} Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.761641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" event={"ID":"d6de658b-2510-4a3c-a895-39e7b760b5e2","Type":"ContainerStarted","Data":"772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693"} Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.762839 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" event={"ID":"0a3bae85-a2b7-4dc7-9ef4-5001ea024453","Type":"ContainerStarted","Data":"992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df"} Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.763755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" event={"ID":"5be7c3b0-feda-4dfd-963c-17813fdc8651","Type":"ContainerStarted","Data":"f54f63b312f5b980c4f2af7b39cef5deed13027c352dd23832dd14111d545f4a"} Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.801290 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8tc6\" (UniqueName: \"kubernetes.io/projected/1926a0b0-d825-4666-af1b-dcf70edde6e5-kube-api-access-g8tc6\") pod \"cluster-logging-operator-c769fd969-rb6br\" (UID: \"1926a0b0-d825-4666-af1b-dcf70edde6e5\") " pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.902777 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8tc6\" (UniqueName: \"kubernetes.io/projected/1926a0b0-d825-4666-af1b-dcf70edde6e5-kube-api-access-g8tc6\") pod \"cluster-logging-operator-c769fd969-rb6br\" (UID: \"1926a0b0-d825-4666-af1b-dcf70edde6e5\") " pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" Feb 26 14:30:01 crc kubenswrapper[4809]: I0226 14:30:01.921754 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8tc6\" (UniqueName: 
\"kubernetes.io/projected/1926a0b0-d825-4666-af1b-dcf70edde6e5-kube-api-access-g8tc6\") pod \"cluster-logging-operator-c769fd969-rb6br\" (UID: \"1926a0b0-d825-4666-af1b-dcf70edde6e5\") " pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" Feb 26 14:30:02 crc kubenswrapper[4809]: I0226 14:30:02.036971 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" Feb 26 14:30:02 crc kubenswrapper[4809]: I0226 14:30:02.254934 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-rb6br"] Feb 26 14:30:02 crc kubenswrapper[4809]: W0226 14:30:02.257312 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1926a0b0_d825_4666_af1b_dcf70edde6e5.slice/crio-896e3efee679cced29aa709e4d1afaa7cb269238dc24b3c7d08d9dee6dc53d7d WatchSource:0}: Error finding container 896e3efee679cced29aa709e4d1afaa7cb269238dc24b3c7d08d9dee6dc53d7d: Status 404 returned error can't find the container with id 896e3efee679cced29aa709e4d1afaa7cb269238dc24b3c7d08d9dee6dc53d7d Feb 26 14:30:02 crc kubenswrapper[4809]: I0226 14:30:02.777831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" event={"ID":"1926a0b0-d825-4666-af1b-dcf70edde6e5","Type":"ContainerStarted","Data":"896e3efee679cced29aa709e4d1afaa7cb269238dc24b3c7d08d9dee6dc53d7d"} Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.040105 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.121893 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfx2q\" (UniqueName: \"kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q\") pod \"d6de658b-2510-4a3c-a895-39e7b760b5e2\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.122035 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume\") pod \"d6de658b-2510-4a3c-a895-39e7b760b5e2\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.122067 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume\") pod \"d6de658b-2510-4a3c-a895-39e7b760b5e2\" (UID: \"d6de658b-2510-4a3c-a895-39e7b760b5e2\") " Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.122916 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6de658b-2510-4a3c-a895-39e7b760b5e2" (UID: "d6de658b-2510-4a3c-a895-39e7b760b5e2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.132207 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6de658b-2510-4a3c-a895-39e7b760b5e2" (UID: "d6de658b-2510-4a3c-a895-39e7b760b5e2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.132286 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q" (OuterVolumeSpecName: "kube-api-access-rfx2q") pod "d6de658b-2510-4a3c-a895-39e7b760b5e2" (UID: "d6de658b-2510-4a3c-a895-39e7b760b5e2"). InnerVolumeSpecName "kube-api-access-rfx2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.223333 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfx2q\" (UniqueName: \"kubernetes.io/projected/d6de658b-2510-4a3c-a895-39e7b760b5e2-kube-api-access-rfx2q\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.223375 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6de658b-2510-4a3c-a895-39e7b760b5e2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.223386 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6de658b-2510-4a3c-a895-39e7b760b5e2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.788995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" event={"ID":"d6de658b-2510-4a3c-a895-39e7b760b5e2","Type":"ContainerDied","Data":"772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693"} Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.789289 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772e4c59ee7b332b2e7b0997f879bf3ebdc0869e35c50c334c2338c20d2e9693" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.789061 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4" Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.791399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" event={"ID":"0a3bae85-a2b7-4dc7-9ef4-5001ea024453","Type":"ContainerStarted","Data":"1211139415e3c1090d3baa74500fae3b31d5855d20fe7c9c1c3336944ddca6c5"} Feb 26 14:30:03 crc kubenswrapper[4809]: I0226 14:30:03.810897 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" podStartSLOduration=1.54336853 podStartE2EDuration="3.810877709s" podCreationTimestamp="2026-02-26 14:30:00 +0000 UTC" firstStartedPulling="2026-02-26 14:30:01.150685406 +0000 UTC m=+979.624005939" lastFinishedPulling="2026-02-26 14:30:03.418194595 +0000 UTC m=+981.891515118" observedRunningTime="2026-02-26 14:30:03.803757568 +0000 UTC m=+982.277078091" watchObservedRunningTime="2026-02-26 14:30:03.810877709 +0000 UTC m=+982.284198242" Feb 26 14:30:04 crc kubenswrapper[4809]: I0226 14:30:04.821377 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a3bae85-a2b7-4dc7-9ef4-5001ea024453" containerID="1211139415e3c1090d3baa74500fae3b31d5855d20fe7c9c1c3336944ddca6c5" exitCode=0 Feb 26 14:30:04 crc kubenswrapper[4809]: I0226 14:30:04.821746 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" event={"ID":"0a3bae85-a2b7-4dc7-9ef4-5001ea024453","Type":"ContainerDied","Data":"1211139415e3c1090d3baa74500fae3b31d5855d20fe7c9c1c3336944ddca6c5"} Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.039244 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.192953 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgbj\" (UniqueName: \"kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj\") pod \"0a3bae85-a2b7-4dc7-9ef4-5001ea024453\" (UID: \"0a3bae85-a2b7-4dc7-9ef4-5001ea024453\") " Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.200086 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj" (OuterVolumeSpecName: "kube-api-access-8fgbj") pod "0a3bae85-a2b7-4dc7-9ef4-5001ea024453" (UID: "0a3bae85-a2b7-4dc7-9ef4-5001ea024453"). InnerVolumeSpecName "kube-api-access-8fgbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.294915 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgbj\" (UniqueName: \"kubernetes.io/projected/0a3bae85-a2b7-4dc7-9ef4-5001ea024453-kube-api-access-8fgbj\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.845769 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.845767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535270-cjtxt" event={"ID":"0a3bae85-a2b7-4dc7-9ef4-5001ea024453","Type":"ContainerDied","Data":"992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df"} Feb 26 14:30:07 crc kubenswrapper[4809]: I0226 14:30:07.846604 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992fd227a5c4b329ebfc85cf70a1ccdc703e5b27827d3433687fb50377ddc2df" Feb 26 14:30:08 crc kubenswrapper[4809]: I0226 14:30:08.104081 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-h9shg"] Feb 26 14:30:08 crc kubenswrapper[4809]: I0226 14:30:08.109487 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535264-h9shg"] Feb 26 14:30:08 crc kubenswrapper[4809]: I0226 14:30:08.268071 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868c3491-feae-4e59-bd9f-60b5ea306458" path="/var/lib/kubelet/pods/868c3491-feae-4e59-bd9f-60b5ea306458/volumes" Feb 26 14:30:10 crc kubenswrapper[4809]: I0226 14:30:10.873094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" event={"ID":"5be7c3b0-feda-4dfd-963c-17813fdc8651","Type":"ContainerStarted","Data":"19368411354faeebf3ba3d9b347a1030faa62c8a6b3105e6c5260da7c32a2492"} Feb 26 14:30:10 crc kubenswrapper[4809]: I0226 14:30:10.875657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" event={"ID":"1926a0b0-d825-4666-af1b-dcf70edde6e5","Type":"ContainerStarted","Data":"04731a208eb6072713f2b42b032eee5ed2b636fd56349ae4623b4b4b0b58dff3"} Feb 26 14:30:17 crc kubenswrapper[4809]: I0226 14:30:17.939456 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" event={"ID":"5be7c3b0-feda-4dfd-963c-17813fdc8651","Type":"ContainerStarted","Data":"46c5d8fee7d5277e43ef181ea0bac90cb5729fb51ed26701bddc7e004ac9ab17"} Feb 26 14:30:17 crc kubenswrapper[4809]: I0226 14:30:17.939987 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:17 crc kubenswrapper[4809]: I0226 14:30:17.944484 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 14:30:17 crc kubenswrapper[4809]: I0226 14:30:17.984691 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-rb6br" podStartSLOduration=9.252773305 podStartE2EDuration="16.984673518s" podCreationTimestamp="2026-02-26 14:30:01 +0000 UTC" firstStartedPulling="2026-02-26 14:30:02.260352925 +0000 UTC m=+980.733673448" lastFinishedPulling="2026-02-26 14:30:09.992253128 +0000 UTC m=+988.465573661" observedRunningTime="2026-02-26 14:30:10.91534198 +0000 UTC m=+989.388662493" watchObservedRunningTime="2026-02-26 14:30:17.984673518 +0000 UTC m=+996.457994051" Feb 26 14:30:17 crc kubenswrapper[4809]: I0226 14:30:17.986348 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" 
podStartSLOduration=2.304633127 podStartE2EDuration="17.986341415s" podCreationTimestamp="2026-02-26 14:30:00 +0000 UTC" firstStartedPulling="2026-02-26 14:30:01.061541846 +0000 UTC m=+979.534862369" lastFinishedPulling="2026-02-26 14:30:16.743250134 +0000 UTC m=+995.216570657" observedRunningTime="2026-02-26 14:30:17.978694439 +0000 UTC m=+996.452014992" watchObservedRunningTime="2026-02-26 14:30:17.986341415 +0000 UTC m=+996.459661948" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.090828 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 26 14:30:22 crc kubenswrapper[4809]: E0226 14:30:22.091621 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3bae85-a2b7-4dc7-9ef4-5001ea024453" containerName="oc" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.091638 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3bae85-a2b7-4dc7-9ef4-5001ea024453" containerName="oc" Feb 26 14:30:22 crc kubenswrapper[4809]: E0226 14:30:22.091653 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6de658b-2510-4a3c-a895-39e7b760b5e2" containerName="collect-profiles" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.091660 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6de658b-2510-4a3c-a895-39e7b760b5e2" containerName="collect-profiles" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.091771 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3bae85-a2b7-4dc7-9ef4-5001ea024453" containerName="oc" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.091791 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6de658b-2510-4a3c-a895-39e7b760b5e2" containerName="collect-profiles" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.092261 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.095233 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.095263 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.106283 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.217332 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-527c82c0-f428-4c2a-9827-af517786b86c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-527c82c0-f428-4c2a-9827-af517786b86c\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.217389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pp5q\" (UniqueName: \"kubernetes.io/projected/eded29e9-1695-450e-bd91-a24cb4bffa5d-kube-api-access-2pp5q\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.318891 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-527c82c0-f428-4c2a-9827-af517786b86c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-527c82c0-f428-4c2a-9827-af517786b86c\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.319292 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pp5q\" (UniqueName: \"kubernetes.io/projected/eded29e9-1695-450e-bd91-a24cb4bffa5d-kube-api-access-2pp5q\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.322376 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.322412 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-527c82c0-f428-4c2a-9827-af517786b86c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-527c82c0-f428-4c2a-9827-af517786b86c\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7670a989faca5c186f00f8531e3c78ddef07e0b9fea22985bf316ccfdd65b920/globalmount\"" pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.339505 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pp5q\" (UniqueName: \"kubernetes.io/projected/eded29e9-1695-450e-bd91-a24cb4bffa5d-kube-api-access-2pp5q\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.344769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-527c82c0-f428-4c2a-9827-af517786b86c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-527c82c0-f428-4c2a-9827-af517786b86c\") pod \"minio\" (UID: \"eded29e9-1695-450e-bd91-a24cb4bffa5d\") " pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.408970 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.635800 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 26 14:30:22 crc kubenswrapper[4809]: I0226 14:30:22.984668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"eded29e9-1695-450e-bd91-a24cb4bffa5d","Type":"ContainerStarted","Data":"6c36de355cb716adb80e12468fb1740c4ad9167447bdfc18af82f32bfeb9d302"} Feb 26 14:30:30 crc kubenswrapper[4809]: I0226 14:30:30.030047 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"eded29e9-1695-450e-bd91-a24cb4bffa5d","Type":"ContainerStarted","Data":"e993b1046faa4de8f6e973559fd921e65e3b4ef68faf06dec9f635ee225b0344"} Feb 26 14:30:30 crc kubenswrapper[4809]: I0226 14:30:30.046433 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.584926263 podStartE2EDuration="11.046411394s" podCreationTimestamp="2026-02-26 14:30:19 +0000 UTC" firstStartedPulling="2026-02-26 14:30:22.645550312 +0000 UTC m=+1001.118870835" lastFinishedPulling="2026-02-26 14:30:29.107035443 +0000 UTC m=+1007.580355966" observedRunningTime="2026-02-26 14:30:30.042518804 +0000 UTC m=+1008.515839327" watchObservedRunningTime="2026-02-26 14:30:30.046411394 +0000 UTC m=+1008.519731927" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.160203 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.162930 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.176173 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.280420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.280534 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjs4\" (UniqueName: \"kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.280606 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.382525 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.382882 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjs4\" (UniqueName: \"kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.383055 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.383105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.383590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.411356 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mkjs4\" (UniqueName: \"kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4\") pod \"community-operators-5sjjl\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.486952 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:33 crc kubenswrapper[4809]: I0226 14:30:33.789062 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:34 crc kubenswrapper[4809]: I0226 14:30:34.055974 4809 generic.go:334] "Generic (PLEG): container finished" podID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerID="de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333" exitCode=0 Feb 26 14:30:34 crc kubenswrapper[4809]: I0226 14:30:34.056040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerDied","Data":"de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333"} Feb 26 14:30:34 crc kubenswrapper[4809]: I0226 14:30:34.056289 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerStarted","Data":"75d273ac82b4d278876b8d5961655246cfba6420c420e767114f0932cd957247"} Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.065834 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerStarted","Data":"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1"} Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.882266 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8"] Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.888343 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.891845 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.892135 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.892257 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-cjkzf" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.892357 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.892453 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 26 14:30:35 crc kubenswrapper[4809]: I0226 14:30:35.902261 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.018710 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.018801 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.019125 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9dx\" (UniqueName: \"kubernetes.io/projected/6dde47f1-266b-4f13-978b-26ff224139e9-kube-api-access-zt9dx\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.019230 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.019355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-config\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.067958 4809 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.068979 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.070730 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.071810 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.075045 4809 generic.go:334] "Generic (PLEG): container finished" podID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerID="422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1" exitCode=0 Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.075091 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerDied","Data":"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1"} Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.080220 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.090645 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.120803 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-config\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.120868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.120931 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.120990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9dx\" (UniqueName: \"kubernetes.io/projected/6dde47f1-266b-4f13-978b-26ff224139e9-kube-api-access-zt9dx\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.121041 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.123429 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-config\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.126816 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.127888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.136911 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6dde47f1-266b-4f13-978b-26ff224139e9-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.142770 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.143616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.149894 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9dx\" (UniqueName: \"kubernetes.io/projected/6dde47f1-266b-4f13-978b-26ff224139e9-kube-api-access-zt9dx\") pod \"logging-loki-distributor-5d5548c9f5-lllg8\" (UID: \"6dde47f1-266b-4f13-978b-26ff224139e9\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.155323 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.155511 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.188611 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.212920 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222202 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prs5q\" (UniqueName: \"kubernetes.io/projected/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-kube-api-access-prs5q\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222260 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222319 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-config\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222443 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrt7\" (UniqueName: \"kubernetes.io/projected/d1f96f50-c096-4107-9fe1-351bb6b20d57-kube-api-access-pvrt7\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222473 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222503 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-config\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222524 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.222591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.281262 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-ctm8g"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.282470 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-ctm8g"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.282564 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.285905 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.286159 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.286307 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.286436 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.286634 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.288308 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-znjxl"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.289350 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.290630 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-c5677" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324351 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324419 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-config\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324521 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324564 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prs5q\" (UniqueName: \"kubernetes.io/projected/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-kube-api-access-prs5q\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324592 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324657 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-config\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324750 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.324813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrt7\" (UniqueName: \"kubernetes.io/projected/d1f96f50-c096-4107-9fe1-351bb6b20d57-kube-api-access-pvrt7\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.326242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-config\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.326537 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.327513 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.327551 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-znjxl"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.330325 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-config\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.333075 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" 
(UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.335445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.337398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.341554 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.342129 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/d1f96f50-c096-4107-9fe1-351bb6b20d57-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.352104 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrt7\" (UniqueName: \"kubernetes.io/projected/d1f96f50-c096-4107-9fe1-351bb6b20d57-kube-api-access-pvrt7\") pod \"logging-loki-querier-76bf7b6d45-d8m5w\" (UID: \"d1f96f50-c096-4107-9fe1-351bb6b20d57\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.372580 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prs5q\" (UniqueName: \"kubernetes.io/projected/9a7bcc4d-3a79-4727-bf5e-e96d028fa950-kube-api-access-prs5q\") pod \"logging-loki-query-frontend-6d6859c548-nv5fd\" (UID: \"9a7bcc4d-3a79-4727-bf5e-e96d028fa950\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.389448 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426114 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426316 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426345 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbk9\" (UniqueName: \"kubernetes.io/projected/b1dab503-8599-4066-85b7-86c389ed7748-kube-api-access-gwbk9\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426370 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-rbac\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426410 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426428 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7js\" (UniqueName: \"kubernetes.io/projected/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-kube-api-access-bg7js\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426445 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-rbac\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " 
pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426468 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426510 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tenants\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426527 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426544 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.426636 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tenants\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " 
pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.499549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.529879 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbk9\" (UniqueName: \"kubernetes.io/projected/b1dab503-8599-4066-85b7-86c389ed7748-kube-api-access-gwbk9\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.529937 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.529972 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-rbac\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.529996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7js\" (UniqueName: \"kubernetes.io/projected/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-kube-api-access-bg7js\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-rbac\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530100 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530129 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: 
\"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530163 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tenants\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530185 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530239 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530307 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tenants\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.530386 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 
14:30:36.535532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.538145 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: E0226 14:30:36.538238 4809 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 26 14:30:36 crc kubenswrapper[4809]: E0226 14:30:36.538315 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret podName:b1dab503-8599-4066-85b7-86c389ed7748 nodeName:}" failed. No retries permitted until 2026-02-26 14:30:37.038295554 +0000 UTC m=+1015.511616087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret") pod "logging-loki-gateway-568bb59667-ctm8g" (UID: "b1dab503-8599-4066-85b7-86c389ed7748") : secret "logging-loki-gateway-http" not found Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.538610 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.539389 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-rbac\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.539815 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: E0226 14:30:36.539922 4809 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.539936 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-rbac\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: E0226 14:30:36.539963 4809 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret podName:1fc6d9b6-52bd-409c-afa9-693fbe42fb7c nodeName:}" failed. No retries permitted until 2026-02-26 14:30:37.039948991 +0000 UTC m=+1015.513269544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret") pod "logging-loki-gateway-568bb59667-znjxl" (UID: "1fc6d9b6-52bd-409c-afa9-693fbe42fb7c") : secret "logging-loki-gateway-http" not found Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.541831 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1dab503-8599-4066-85b7-86c389ed7748-lokistack-gateway\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.542781 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tenants\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.543699 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.552966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.555435 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.557539 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tenants\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.568332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbk9\" (UniqueName: \"kubernetes.io/projected/b1dab503-8599-4066-85b7-86c389ed7748-kube-api-access-gwbk9\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:36 crc 
kubenswrapper[4809]: I0226 14:30:36.598104 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg7js\" (UniqueName: \"kubernetes.io/projected/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-kube-api-access-bg7js\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.748600 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w"] Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.902365 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8"] Feb 26 14:30:36 crc kubenswrapper[4809]: W0226 14:30:36.978322 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a7bcc4d_3a79_4727_bf5e_e96d028fa950.slice/crio-01be3e9988569ade900638049a433089c6e0a4f116f8d5e9c6e0799de570d2de WatchSource:0}: Error finding container 01be3e9988569ade900638049a433089c6e0a4f116f8d5e9c6e0799de570d2de: Status 404 returned error can't find the container with id 01be3e9988569ade900638049a433089c6e0a4f116f8d5e9c6e0799de570d2de Feb 26 14:30:36 crc kubenswrapper[4809]: I0226 14:30:36.983022 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.042711 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.042855 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.051191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1dab503-8599-4066-85b7-86c389ed7748-tls-secret\") pod \"logging-loki-gateway-568bb59667-ctm8g\" (UID: \"b1dab503-8599-4066-85b7-86c389ed7748\") " pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.056693 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1fc6d9b6-52bd-409c-afa9-693fbe42fb7c-tls-secret\") pod \"logging-loki-gateway-568bb59667-znjxl\" (UID: \"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c\") " pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.063285 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.067451 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.074751 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.075115 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.088400 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.099816 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" event={"ID":"6dde47f1-266b-4f13-978b-26ff224139e9","Type":"ContainerStarted","Data":"7300ddafd000460ca26053bb0c817f2c8058271a1fdb59edf1adf7451ad93e26"} Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.103973 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" event={"ID":"9a7bcc4d-3a79-4727-bf5e-e96d028fa950","Type":"ContainerStarted","Data":"01be3e9988569ade900638049a433089c6e0a4f116f8d5e9c6e0799de570d2de"} Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.107622 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" event={"ID":"d1f96f50-c096-4107-9fe1-351bb6b20d57","Type":"ContainerStarted","Data":"819e4d007ae8cde1ed10696410bd3a3cf25f78301f7ab517359c16ec68104d72"} Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.117176 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerStarted","Data":"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf"} Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.124306 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.125346 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.136706 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.142820 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.142921 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143791 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72685\" (UniqueName: \"kubernetes.io/projected/05cda7c6-2dff-46e8-9622-6dda35865e97-kube-api-access-72685\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143817 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ed09cde-a991-420a-a4db-81797b30c03d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed09cde-a991-420a-a4db-81797b30c03d\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.143994 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-config\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.144208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.144245 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.144655 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5sjjl" podStartSLOduration=1.46365986 podStartE2EDuration="4.14464431s" podCreationTimestamp="2026-02-26 14:30:33 +0000 UTC" firstStartedPulling="2026-02-26 14:30:34.057321 +0000 UTC m=+1012.530641523" lastFinishedPulling="2026-02-26 14:30:36.73830545 +0000 UTC m=+1015.211625973" observedRunningTime="2026-02-26 14:30:37.142832259 +0000 UTC m=+1015.616152782" watchObservedRunningTime="2026-02-26 14:30:37.14464431 +0000 UTC m=+1015.617964833" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.202546 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.204891 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.207420 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.207595 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.219073 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.221305 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.245924 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246274 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ffdd6d64-3323-46fd-b882-04be82820142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffdd6d64-3323-46fd-b882-04be82820142\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246362 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9w7\" (UniqueName: \"kubernetes.io/projected/7d913002-7509-40a2-9de5-3efb1c774a56-kube-api-access-mh9w7\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246401 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246433 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246466 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246491 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ed09cde-a991-420a-a4db-81797b30c03d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed09cde-a991-420a-a4db-81797b30c03d\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 
26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246512 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gcm\" (UniqueName: \"kubernetes.io/projected/19265028-6636-400d-9803-4b7cbcf14758-kube-api-access-j5gcm\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246570 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246604 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-config\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246642 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246721 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.246738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72685\" (UniqueName: \"kubernetes.io/projected/05cda7c6-2dff-46e8-9622-6dda35865e97-kube-api-access-72685\") pod 
\"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.248464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-config\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.250755 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-config\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.250806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.251692 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.251720 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ed09cde-a991-420a-a4db-81797b30c03d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed09cde-a991-420a-a4db-81797b30c03d\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb37713cd4dc3352e7d969691ccb66efdcdf44c2a905c8d23f36d73e2be2fac6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.252516 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.252544 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/028c54257e51f53fc6e1268bab3f40dfafe00681bd50429013c50a007ce2362c/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.264790 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.264874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.264892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/05cda7c6-2dff-46e8-9622-6dda35865e97-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.267764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72685\" (UniqueName: \"kubernetes.io/projected/05cda7c6-2dff-46e8-9622-6dda35865e97-kube-api-access-72685\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.272843 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.287231 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b6df600-5e2b-4e1a-b1b6-1bae030c2348\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.290937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ed09cde-a991-420a-a4db-81797b30c03d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed09cde-a991-420a-a4db-81797b30c03d\") pod \"logging-loki-ingester-0\" (UID: \"05cda7c6-2dff-46e8-9622-6dda35865e97\") " pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.349999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350333 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350367 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350407 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350437 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-config\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350502 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ffdd6d64-3323-46fd-b882-04be82820142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffdd6d64-3323-46fd-b882-04be82820142\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9w7\" (UniqueName: \"kubernetes.io/projected/7d913002-7509-40a2-9de5-3efb1c774a56-kube-api-access-mh9w7\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350603 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350623 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gcm\" (UniqueName: \"kubernetes.io/projected/19265028-6636-400d-9803-4b7cbcf14758-kube-api-access-j5gcm\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350644 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350684 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.350710 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.352989 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-config\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.353650 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.354586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.354881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.355280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.354591 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d913002-7509-40a2-9de5-3efb1c774a56-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.358578 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.358582 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.358622 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ffdd6d64-3323-46fd-b882-04be82820142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffdd6d64-3323-46fd-b882-04be82820142\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c5eb749d13c6ae14b1ed3e75ad154858c817959311b08f8e95a3e9383bc3ab9/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.358729 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.358774 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ca4e6c19117f537b4b5282746be966f237a8181529e2091409bae0289ae278a/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.359367 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.362934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d913002-7509-40a2-9de5-3efb1c774a56-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.385528 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9w7\" (UniqueName: \"kubernetes.io/projected/7d913002-7509-40a2-9de5-3efb1c774a56-kube-api-access-mh9w7\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.387777 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/19265028-6636-400d-9803-4b7cbcf14758-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.399666 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.411680 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gcm\" (UniqueName: \"kubernetes.io/projected/19265028-6636-400d-9803-4b7cbcf14758-kube-api-access-j5gcm\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.427187 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-de69c665-b66b-48ba-922b-fa653a86cd6a\") pod \"logging-loki-compactor-0\" (UID: \"19265028-6636-400d-9803-4b7cbcf14758\") " pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.435854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ffdd6d64-3323-46fd-b882-04be82820142\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffdd6d64-3323-46fd-b882-04be82820142\") pod \"logging-loki-index-gateway-0\" (UID: \"7d913002-7509-40a2-9de5-3efb1c774a56\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.479298 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.533380 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.588161 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-znjxl"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.699429 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-568bb59667-ctm8g"] Feb 26 14:30:37 crc kubenswrapper[4809]: I0226 14:30:37.907498 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:37.999447 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 26 14:30:38 crc kubenswrapper[4809]: W0226 14:30:38.004296 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d913002_7509_40a2_9de5_3efb1c774a56.slice/crio-df8997d01ad4e4eceb76167e85205262af6b5ce90a1780381b79a9bde1ebc338 WatchSource:0}: Error finding container df8997d01ad4e4eceb76167e85205262af6b5ce90a1780381b79a9bde1ebc338: Status 404 returned error can't find the container with id df8997d01ad4e4eceb76167e85205262af6b5ce90a1780381b79a9bde1ebc338 Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.049464 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 26 14:30:38 crc kubenswrapper[4809]: W0226 14:30:38.065529 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19265028_6636_400d_9803_4b7cbcf14758.slice/crio-ed6cd2da1961553422237a1f9f4a3b3d3158c5921d8d2355f5bb46f21357145f WatchSource:0}: Error finding container ed6cd2da1961553422237a1f9f4a3b3d3158c5921d8d2355f5bb46f21357145f: Status 404 returned error 
can't find the container with id ed6cd2da1961553422237a1f9f4a3b3d3158c5921d8d2355f5bb46f21357145f Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.139742 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"05cda7c6-2dff-46e8-9622-6dda35865e97","Type":"ContainerStarted","Data":"0b10ef437b2313be822751bb01d581d4aa6ced5bd63ff67668eeab797a8d434d"} Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.146181 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.148073 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.151507 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"19265028-6636-400d-9803-4b7cbcf14758","Type":"ContainerStarted","Data":"ed6cd2da1961553422237a1f9f4a3b3d3158c5921d8d2355f5bb46f21357145f"} Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.156210 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" event={"ID":"b1dab503-8599-4066-85b7-86c389ed7748","Type":"ContainerStarted","Data":"b40e4c1018835febde96308b81080bcacfdee3c28c92988937e2fefe2575c9eb"} Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.162233 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" event={"ID":"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c","Type":"ContainerStarted","Data":"230fb84dddf9380fb4a91e95a5511a172ae2eba1f7e11f118520b3b39879b173"} Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.163716 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.164187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d913002-7509-40a2-9de5-3efb1c774a56","Type":"ContainerStarted","Data":"df8997d01ad4e4eceb76167e85205262af6b5ce90a1780381b79a9bde1ebc338"} Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.269874 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.269938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.270037 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlwtg\" (UniqueName: \"kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 
14:30:38.371908 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.372090 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.372257 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlwtg\" (UniqueName: \"kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.372390 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.374403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.394434 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlwtg\" (UniqueName: \"kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg\") pod \"redhat-marketplace-vdx5n\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:38 crc kubenswrapper[4809]: I0226 14:30:38.500325 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:39 crc kubenswrapper[4809]: I0226 14:30:39.034335 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:39 crc kubenswrapper[4809]: I0226 14:30:39.177679 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerStarted","Data":"bdb94adc2d7894b8102794db6279a22bab53c886232b754567f40528b7d1433f"} Feb 26 14:30:40 crc kubenswrapper[4809]: I0226 14:30:40.189933 4809 generic.go:334] "Generic (PLEG): container finished" podID="c424327c-1291-45a6-8208-c29b283df0e9" containerID="cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421" exitCode=0 Feb 26 14:30:40 crc kubenswrapper[4809]: I0226 14:30:40.190110 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerDied","Data":"cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421"} Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.487998 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.488727 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.542503 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.743783 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.745426 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.755292 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.878134 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swl7c\" (UniqueName: \"kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.878720 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.878876 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.980724 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.980862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swl7c\" (UniqueName: \"kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.980890 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.981334 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:43 crc kubenswrapper[4809]: I0226 14:30:43.981377 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.000939 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-swl7c\" (UniqueName: \"kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c\") pod \"certified-operators-kwq4z\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.062816 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.234860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"05cda7c6-2dff-46e8-9622-6dda35865e97","Type":"ContainerStarted","Data":"e3f74aaf4d45eb745dbb75d73286f6a21b89e382f7c5dc4f023634bfbf598858"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.235936 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.247718 4809 generic.go:334] "Generic (PLEG): container finished" podID="c424327c-1291-45a6-8208-c29b283df0e9" containerID="ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8" exitCode=0 Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.247819 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerDied","Data":"ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.251715 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" event={"ID":"6dde47f1-266b-4f13-978b-26ff224139e9","Type":"ContainerStarted","Data":"99b931ad186d6cfbce7d8d85b477c5291e34e2b0f924cf407a3858ac53ff5370"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.251848 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.270685 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=4.08056402 podStartE2EDuration="9.27065941s" podCreationTimestamp="2026-02-26 14:30:35 +0000 UTC" firstStartedPulling="2026-02-26 14:30:37.911836673 +0000 UTC m=+1016.385157196" lastFinishedPulling="2026-02-26 14:30:43.101932053 +0000 UTC m=+1021.575252586" observedRunningTime="2026-02-26 14:30:44.260718149 +0000 UTC m=+1022.734038672" watchObservedRunningTime="2026-02-26 14:30:44.27065941 +0000 UTC m=+1022.743979933" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.291132 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"19265028-6636-400d-9803-4b7cbcf14758","Type":"ContainerStarted","Data":"9695cca409d73e5be4147b339b0e8b1f96c1441c00c21ec8620dbf6cd37f9e2c"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.291177 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" event={"ID":"b1dab503-8599-4066-85b7-86c389ed7748","Type":"ContainerStarted","Data":"33236710b5e582a61e74ac82f0330ce0a8c713fde3a5cd9720c938971d85bf59"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.291196 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.303655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" event={"ID":"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c","Type":"ContainerStarted","Data":"a491e2b6033f73dc8e52f21f5d826c6fe363b72032c09cfb365a4af34c53f40e"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.313107 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" event={"ID":"d1f96f50-c096-4107-9fe1-351bb6b20d57","Type":"ContainerStarted","Data":"3fc1385f6afe7a6e69c576588fdfe91239b79c8c4f509fe3420d00416e7bc61a"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.313154 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.317445 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" podStartSLOduration=3.143964807 podStartE2EDuration="9.317430483s" podCreationTimestamp="2026-02-26 14:30:35 +0000 UTC" firstStartedPulling="2026-02-26 14:30:36.940988051 +0000 UTC m=+1015.414308574" lastFinishedPulling="2026-02-26 14:30:43.114453727 +0000 UTC m=+1021.587774250" observedRunningTime="2026-02-26 14:30:44.313994016 +0000 UTC m=+1022.787314549" watchObservedRunningTime="2026-02-26 14:30:44.317430483 +0000 UTC m=+1022.790751006" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.328152 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" event={"ID":"9a7bcc4d-3a79-4727-bf5e-e96d028fa950","Type":"ContainerStarted","Data":"373ca23e1ab7b4e777156b88d7215b883e885180774df5a0a8c8a3ae50905bd5"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.328599 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.335110 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d913002-7509-40a2-9de5-3efb1c774a56","Type":"ContainerStarted","Data":"0978b84e73c26c6d38d665109407b6ad8ef3e14d7ef0003177c546a9ddac7d10"} Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.351126 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.3638068199999998 podStartE2EDuration="8.351101085s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:38.068125113 +0000 UTC m=+1016.541445636" lastFinishedPulling="2026-02-26 14:30:43.055419378 +0000 UTC m=+1021.528739901" observedRunningTime="2026-02-26 14:30:44.342883382 +0000 UTC m=+1022.816203905" watchObservedRunningTime="2026-02-26 14:30:44.351101085 +0000 UTC m=+1022.824421608" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.369400 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.26125459 podStartE2EDuration="8.369377282s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:38.006996784 +0000 UTC m=+1016.480317307" lastFinishedPulling="2026-02-26 14:30:43.115119476 +0000 UTC m=+1021.588439999" observedRunningTime="2026-02-26 
14:30:44.364608987 +0000 UTC m=+1022.837929510" watchObservedRunningTime="2026-02-26 14:30:44.369377282 +0000 UTC m=+1022.842697805" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.414448 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.448776 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" podStartSLOduration=2.328278888 podStartE2EDuration="8.448757986s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:36.995230925 +0000 UTC m=+1015.468551448" lastFinishedPulling="2026-02-26 14:30:43.115710023 +0000 UTC m=+1021.589030546" observedRunningTime="2026-02-26 14:30:44.446556304 +0000 UTC m=+1022.919876827" watchObservedRunningTime="2026-02-26 14:30:44.448757986 +0000 UTC m=+1022.922078509" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.450248 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" podStartSLOduration=2.100407445 podStartE2EDuration="8.450242518s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:36.765280383 +0000 UTC m=+1015.238600916" lastFinishedPulling="2026-02-26 14:30:43.115115466 +0000 UTC m=+1021.588435989" observedRunningTime="2026-02-26 14:30:44.402385105 +0000 UTC m=+1022.875705638" watchObservedRunningTime="2026-02-26 14:30:44.450242518 +0000 UTC m=+1022.923563041" Feb 26 14:30:44 crc kubenswrapper[4809]: I0226 14:30:44.611283 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.351679 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerStarted","Data":"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687"} Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.354344 4809 generic.go:334] "Generic (PLEG): container finished" podID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerID="32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01" exitCode=0 Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.354415 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerDied","Data":"32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01"} Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.354444 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerStarted","Data":"57fe535a51cacef8a198d374c4adf29e488b1800ad5aeb145becc29a071bb21c"} Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.355176 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.372151 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdx5n" podStartSLOduration=4.257639979 podStartE2EDuration="7.372137037s" podCreationTimestamp="2026-02-26 14:30:38 +0000 UTC" firstStartedPulling="2026-02-26 
14:30:41.626640446 +0000 UTC m=+1020.099960979" lastFinishedPulling="2026-02-26 14:30:44.741137514 +0000 UTC m=+1023.214458037" observedRunningTime="2026-02-26 14:30:45.371349494 +0000 UTC m=+1023.844670037" watchObservedRunningTime="2026-02-26 14:30:45.372137037 +0000 UTC m=+1023.845457560" Feb 26 14:30:45 crc kubenswrapper[4809]: I0226 14:30:45.938336 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.363489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" event={"ID":"b1dab503-8599-4066-85b7-86c389ed7748","Type":"ContainerStarted","Data":"7855f0bca6b9a48b2e521f33696817ac0cd51ccbf9c6793e718b568c5601ec49"} Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.363840 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.363856 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.366206 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" event={"ID":"1fc6d9b6-52bd-409c-afa9-693fbe42fb7c","Type":"ContainerStarted","Data":"a4f953ef5e85083ac5f1c8427f3cd3646ac65d4930113c5de86e4ea1fe7c50e5"} Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.366619 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5sjjl" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="registry-server" containerID="cri-o://a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf" gracePeriod=2 Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.372881 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.373323 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.385047 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podStartSLOduration=2.173654358 podStartE2EDuration="10.385004608s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:37.710508501 +0000 UTC m=+1016.183829024" lastFinishedPulling="2026-02-26 14:30:45.921858751 +0000 UTC m=+1024.395179274" observedRunningTime="2026-02-26 14:30:46.382917159 +0000 UTC m=+1024.856237692" watchObservedRunningTime="2026-02-26 14:30:46.385004608 +0000 UTC m=+1024.858325131" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.457298 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podStartSLOduration=2.201145424 podStartE2EDuration="10.457272521s" podCreationTimestamp="2026-02-26 14:30:36 +0000 UTC" firstStartedPulling="2026-02-26 14:30:37.642107236 +0000 UTC m=+1016.115427749" lastFinishedPulling="2026-02-26 14:30:45.898234323 +0000 UTC m=+1024.371554846" observedRunningTime="2026-02-26 14:30:46.442950656 +0000 UTC m=+1024.916271199" watchObservedRunningTime="2026-02-26 14:30:46.457272521 +0000 
UTC m=+1024.930593044" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.743641 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.850137 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjs4\" (UniqueName: \"kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4\") pod \"a057d12b-97ff-4dd0-a602-c50327bd56f7\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.850219 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities\") pod \"a057d12b-97ff-4dd0-a602-c50327bd56f7\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.850275 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content\") pod \"a057d12b-97ff-4dd0-a602-c50327bd56f7\" (UID: \"a057d12b-97ff-4dd0-a602-c50327bd56f7\") " Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.851469 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities" (OuterVolumeSpecName: "utilities") pod "a057d12b-97ff-4dd0-a602-c50327bd56f7" (UID: "a057d12b-97ff-4dd0-a602-c50327bd56f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.855695 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4" (OuterVolumeSpecName: "kube-api-access-mkjs4") pod "a057d12b-97ff-4dd0-a602-c50327bd56f7" (UID: "a057d12b-97ff-4dd0-a602-c50327bd56f7"). InnerVolumeSpecName "kube-api-access-mkjs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.897679 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a057d12b-97ff-4dd0-a602-c50327bd56f7" (UID: "a057d12b-97ff-4dd0-a602-c50327bd56f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.953085 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjs4\" (UniqueName: \"kubernetes.io/projected/a057d12b-97ff-4dd0-a602-c50327bd56f7-kube-api-access-mkjs4\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.953124 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:46 crc kubenswrapper[4809]: I0226 14:30:46.953136 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a057d12b-97ff-4dd0-a602-c50327bd56f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.273931 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.273973 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.283311 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.285690 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.380575 4809 generic.go:334] "Generic (PLEG): container finished" podID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerID="1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6" exitCode=0 Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.380693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerDied","Data":"1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6"} Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.391319 4809 generic.go:334] "Generic (PLEG): container finished" podID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerID="a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf" exitCode=0 Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.392226 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sjjl" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.397166 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerDied","Data":"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf"} Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.397208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sjjl" event={"ID":"a057d12b-97ff-4dd0-a602-c50327bd56f7","Type":"ContainerDied","Data":"75d273ac82b4d278876b8d5961655246cfba6420c420e767114f0932cd957247"} Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.397226 4809 scope.go:117] "RemoveContainer" containerID="a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.425708 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.434548 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5sjjl"] Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.530842 4809 scope.go:117] "RemoveContainer" containerID="422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.551396 4809 scope.go:117] "RemoveContainer" containerID="de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.575692 4809 scope.go:117] "RemoveContainer" containerID="a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf" Feb 26 14:30:47 crc kubenswrapper[4809]: E0226 14:30:47.576160 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf\": container with ID starting with a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf not found: ID does not exist" containerID="a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.576202 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf"} err="failed to get container status \"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf\": rpc error: code = NotFound desc = could not find container \"a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf\": container with ID starting with a79f0a4d95d3175ded729cecbcf9acbeac9c7e42f1881522819c71ea84a81cdf not found: ID does not exist" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.576226 4809 scope.go:117] "RemoveContainer" containerID="422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1" Feb 26 14:30:47 crc kubenswrapper[4809]: E0226 14:30:47.577161 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1\": container with ID starting with 422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1 not found: ID does not exist" containerID="422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.577206 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1"} err="failed to get container status \"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1\": rpc error: code = NotFound desc = could not find container \"422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1\": container with ID starting with 422140c6f534870cc6329176a06773b1e76950cf1bb5cb5f0648b50fc7238fc1 not found: ID does not exist" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.577232 4809 scope.go:117] "RemoveContainer" containerID="de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333" Feb 26 14:30:47 crc kubenswrapper[4809]: E0226 14:30:47.577554 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333\": container with ID starting with de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333 not found: ID does not exist" containerID="de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333" Feb 26 14:30:47 crc kubenswrapper[4809]: I0226 14:30:47.577589 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333"} err="failed to get container status \"de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333\": rpc error: code = NotFound desc = could not find container \"de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333\": container with ID starting with de34fd0afcfb55f637ffc184a58e302de1d3f50284752c83c377fa626b413333 not found: ID does not exist" Feb 26 14:30:48 crc kubenswrapper[4809]: I0226 14:30:48.270415 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" path="/var/lib/kubelet/pods/a057d12b-97ff-4dd0-a602-c50327bd56f7/volumes" Feb 26 14:30:48 crc kubenswrapper[4809]: I0226 14:30:48.501420 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:48 crc kubenswrapper[4809]: I0226 14:30:48.502798 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:48 crc kubenswrapper[4809]: I0226 14:30:48.548747 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:49 crc kubenswrapper[4809]: I0226 14:30:49.449324 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:49 crc kubenswrapper[4809]: I0226 14:30:49.490609 4809 scope.go:117] "RemoveContainer" containerID="cfe4fe28ce9fb920345eda1e92d945d4f23e23b9dc3d87a6d0193e41282004be" Feb 26 14:30:51 crc kubenswrapper[4809]: I0226 14:30:51.134072 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:51 crc kubenswrapper[4809]: I0226 14:30:51.430571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerStarted","Data":"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84"} Feb 26 14:30:51 crc kubenswrapper[4809]: I0226 14:30:51.449122 4809 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwq4z" podStartSLOduration=3.935952147 podStartE2EDuration="8.449107454s" podCreationTimestamp="2026-02-26 14:30:43 +0000 UTC" firstStartedPulling="2026-02-26 14:30:45.741935724 +0000 UTC m=+1024.215256247" lastFinishedPulling="2026-02-26 14:30:50.255091031 +0000 UTC m=+1028.728411554" observedRunningTime="2026-02-26 14:30:51.448731913 +0000 UTC m=+1029.922052436" watchObservedRunningTime="2026-02-26 14:30:51.449107454 +0000 UTC m=+1029.922427977" Feb 26 14:30:52 crc kubenswrapper[4809]: I0226 14:30:52.436715 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vdx5n" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="registry-server" containerID="cri-o://45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687" gracePeriod=2 Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.392790 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445234 4809 generic.go:334] "Generic (PLEG): container finished" podID="c424327c-1291-45a6-8208-c29b283df0e9" containerID="45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687" exitCode=0 Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445273 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerDied","Data":"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687"} Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445299 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdx5n" event={"ID":"c424327c-1291-45a6-8208-c29b283df0e9","Type":"ContainerDied","Data":"bdb94adc2d7894b8102794db6279a22bab53c886232b754567f40528b7d1433f"} Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445315 4809 scope.go:117] "RemoveContainer" containerID="45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445416 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdx5n" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.445991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content\") pod \"c424327c-1291-45a6-8208-c29b283df0e9\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.446048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlwtg\" (UniqueName: \"kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg\") pod \"c424327c-1291-45a6-8208-c29b283df0e9\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.446235 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities\") pod \"c424327c-1291-45a6-8208-c29b283df0e9\" (UID: \"c424327c-1291-45a6-8208-c29b283df0e9\") " Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.447057 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities" (OuterVolumeSpecName: "utilities") pod "c424327c-1291-45a6-8208-c29b283df0e9" (UID: "c424327c-1291-45a6-8208-c29b283df0e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.453871 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg" (OuterVolumeSpecName: "kube-api-access-qlwtg") pod "c424327c-1291-45a6-8208-c29b283df0e9" (UID: "c424327c-1291-45a6-8208-c29b283df0e9"). InnerVolumeSpecName "kube-api-access-qlwtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.478284 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c424327c-1291-45a6-8208-c29b283df0e9" (UID: "c424327c-1291-45a6-8208-c29b283df0e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.484912 4809 scope.go:117] "RemoveContainer" containerID="ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.505288 4809 scope.go:117] "RemoveContainer" containerID="cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.537330 4809 scope.go:117] "RemoveContainer" containerID="45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687" Feb 26 14:30:53 crc kubenswrapper[4809]: E0226 14:30:53.537767 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687\": container with ID starting with 45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687 not found: ID does not exist" containerID="45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.537806 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687"} err="failed to get container status \"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687\": rpc error: code = NotFound desc = could not find container \"45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687\": container with ID starting with 45d769a008928f498fa3b131c753b2f4c9b02e3542d6ab9865e619ad86bfe687 not found: ID does not exist" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.537829 4809 scope.go:117] "RemoveContainer" containerID="ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8" Feb 26 14:30:53 crc kubenswrapper[4809]: E0226 14:30:53.538234 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8\": container with ID starting with ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8 not found: ID does not exist" containerID="ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.538292 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8"} err="failed to get container status \"ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8\": rpc error: code = NotFound desc = could not find container \"ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8\": container with ID starting with ce9bdb2b62d823272610f9ed2bc408f794a3952907f7f451a86176066b8928d8 not found: ID does not exist" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.538325 4809 scope.go:117] "RemoveContainer" containerID="cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421" Feb 26 14:30:53 crc kubenswrapper[4809]: E0226 14:30:53.538854 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421\": container with ID starting with cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421 not found: ID does not exist" containerID="cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421" Feb 26 14:30:53 crc 
kubenswrapper[4809]: I0226 14:30:53.538883 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421"} err="failed to get container status \"cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421\": rpc error: code = NotFound desc = could not find container \"cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421\": container with ID starting with cda40c66c2e2133916f6e227d5be93ee0c2c985cb00fdd960c60bf25d2d6f421 not found: ID does not exist" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.548291 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.548349 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c424327c-1291-45a6-8208-c29b283df0e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.548367 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlwtg\" (UniqueName: \"kubernetes.io/projected/c424327c-1291-45a6-8208-c29b283df0e9-kube-api-access-qlwtg\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.774457 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:53 crc kubenswrapper[4809]: I0226 14:30:53.784956 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdx5n"] Feb 26 14:30:54 crc kubenswrapper[4809]: I0226 14:30:54.063372 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:54 crc kubenswrapper[4809]: I0226 14:30:54.063805 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:54 crc kubenswrapper[4809]: I0226 14:30:54.122587 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:54 crc kubenswrapper[4809]: I0226 14:30:54.266400 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c424327c-1291-45a6-8208-c29b283df0e9" path="/var/lib/kubelet/pods/c424327c-1291-45a6-8208-c29b283df0e9/volumes" Feb 26 14:30:55 crc kubenswrapper[4809]: I0226 14:30:55.503766 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:56 crc kubenswrapper[4809]: I0226 14:30:56.536867 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:30:57 crc kubenswrapper[4809]: I0226 14:30:57.472027 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwq4z" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="registry-server" containerID="cri-o://e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84" gracePeriod=2 Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.362993 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.483149 4809 generic.go:334] "Generic (PLEG): container finished" podID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerID="e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84" exitCode=0 Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.483194 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerDied","Data":"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84"} Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.483222 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwq4z" event={"ID":"72af3cdf-470c-4a9e-b014-9feb6c774e18","Type":"ContainerDied","Data":"57fe535a51cacef8a198d374c4adf29e488b1800ad5aeb145becc29a071bb21c"} Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.483240 4809 scope.go:117] "RemoveContainer" containerID="e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.483260 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwq4z" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.510319 4809 scope.go:117] "RemoveContainer" containerID="1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.518852 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swl7c\" (UniqueName: \"kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c\") pod \"72af3cdf-470c-4a9e-b014-9feb6c774e18\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.518942 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities\") pod \"72af3cdf-470c-4a9e-b014-9feb6c774e18\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.519112 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content\") pod \"72af3cdf-470c-4a9e-b014-9feb6c774e18\" (UID: \"72af3cdf-470c-4a9e-b014-9feb6c774e18\") " Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.521310 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities" (OuterVolumeSpecName: "utilities") pod "72af3cdf-470c-4a9e-b014-9feb6c774e18" (UID: "72af3cdf-470c-4a9e-b014-9feb6c774e18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.524444 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c" (OuterVolumeSpecName: "kube-api-access-swl7c") pod "72af3cdf-470c-4a9e-b014-9feb6c774e18" (UID: "72af3cdf-470c-4a9e-b014-9feb6c774e18"). InnerVolumeSpecName "kube-api-access-swl7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.536121 4809 scope.go:117] "RemoveContainer" containerID="32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.583673 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72af3cdf-470c-4a9e-b014-9feb6c774e18" (UID: "72af3cdf-470c-4a9e-b014-9feb6c774e18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.608107 4809 scope.go:117] "RemoveContainer" containerID="e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84" Feb 26 14:30:58 crc kubenswrapper[4809]: E0226 14:30:58.608905 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84\": container with ID starting with e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84 not found: ID does not exist" containerID="e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.608937 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84"} err="failed to get container status \"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84\": rpc error: code = NotFound desc = could not find container \"e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84\": container with ID starting with e7721387c8febcb11ef8cb4dbc3f9c3072b14f2bc12e9afc510849288edaae84 not found: ID does not exist" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.608957 4809 scope.go:117] "RemoveContainer" containerID="1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6" Feb 26 14:30:58 crc kubenswrapper[4809]: E0226 14:30:58.609326 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6\": container with ID starting with 1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6 not found: ID does not exist" containerID="1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.609377 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6"} err="failed to get container status \"1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6\": rpc error: code = NotFound desc = could not find container \"1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6\": container with ID starting with 1bc4eaaa6fcf0487ec2271b3a053680921c33688fab33954ceb014b40df2cde6 not found: ID does not exist" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.609558 4809 scope.go:117] "RemoveContainer" containerID="32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01" Feb 26 14:30:58 crc kubenswrapper[4809]: E0226 14:30:58.611331 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01\": container with ID starting with 32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01 not found: ID does not exist" containerID="32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.611373 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01"} err="failed to get container status \"32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01\": rpc error: code = NotFound desc = could not find container \"32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01\": container with ID starting with 32dacd8532c7a8a6686811b749406580112ba13caf899cbe108810a67a3eba01 not found: ID does not exist" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.621122 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.621170 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swl7c\" (UniqueName: \"kubernetes.io/projected/72af3cdf-470c-4a9e-b014-9feb6c774e18-kube-api-access-swl7c\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.621183 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72af3cdf-470c-4a9e-b014-9feb6c774e18-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.813102 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:30:58 crc kubenswrapper[4809]: I0226 14:30:58.820406 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwq4z"] Feb 26 14:31:00 crc kubenswrapper[4809]: I0226 14:31:00.265811 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" path="/var/lib/kubelet/pods/72af3cdf-470c-4a9e-b014-9feb6c774e18/volumes" Feb 26 14:31:06 crc kubenswrapper[4809]: I0226 14:31:06.220467 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 14:31:06 crc kubenswrapper[4809]: I0226 14:31:06.396847 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 14:31:06 crc kubenswrapper[4809]: I0226 14:31:06.512353 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 14:31:07 crc kubenswrapper[4809]: I0226 14:31:07.410480 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 26 14:31:07 crc kubenswrapper[4809]: I0226 14:31:07.410553 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="05cda7c6-2dff-46e8-9622-6dda35865e97" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:31:07 crc 
kubenswrapper[4809]: I0226 14:31:07.485030 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 26 14:31:07 crc kubenswrapper[4809]: I0226 14:31:07.544694 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 26 14:31:17 crc kubenswrapper[4809]: I0226 14:31:17.405062 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 26 14:31:17 crc kubenswrapper[4809]: I0226 14:31:17.405691 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="05cda7c6-2dff-46e8-9622-6dda35865e97" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:31:27 crc kubenswrapper[4809]: I0226 14:31:27.415338 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 26 14:31:27 crc kubenswrapper[4809]: I0226 14:31:27.415634 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="05cda7c6-2dff-46e8-9622-6dda35865e97" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:31:37 crc kubenswrapper[4809]: I0226 14:31:37.406587 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 26 14:31:41 crc kubenswrapper[4809]: I0226 14:31:41.794074 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:31:41 crc kubenswrapper[4809]: I0226 14:31:41.795321 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.709362 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-szh9k"] Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710372 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710397 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710415 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710426 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="registry-server" Feb 26 14:31:54 crc 
kubenswrapper[4809]: E0226 14:31:54.710441 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710453 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710480 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710492 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710517 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710530 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710547 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710557 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710576 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710587 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710608 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710619 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="extract-content" Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.710635 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.710646 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="extract-utilities" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.711234 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="72af3cdf-470c-4a9e-b014-9feb6c774e18" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.711265 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c424327c-1291-45a6-8208-c29b283df0e9" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.711290 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a057d12b-97ff-4dd0-a602-c50327bd56f7" containerName="registry-server" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.713573 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.715297 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.715831 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.716820 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.717209 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-2px57" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.717417 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.726046 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.738867 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-szh9k"] Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757219 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757357 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir\") pod \"collector-szh9k\" (UID: 
\"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757438 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757460 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.757538 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.773275 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-szh9k"] Feb 26 14:31:54 crc kubenswrapper[4809]: E0226 14:31:54.773992 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-r6rht metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-szh9k" podUID="741ab1d8-cee8-419e-a161-caad18fc0b61" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859129 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859164 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859189 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859207 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859229 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859260 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.859294 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.860108 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.860367 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.860557 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.860884 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.864372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.868708 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.870559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.878094 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.878669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.879342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht\") pod \"collector-szh9k\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " pod="openshift-logging/collector-szh9k" Feb 26 
14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.892211 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.922378 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-szh9k" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960169 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960217 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960305 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960334 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960362 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960385 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960435 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960463 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 
14:31:54.960484 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.960507 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config\") pod \"741ab1d8-cee8-419e-a161-caad18fc0b61\" (UID: \"741ab1d8-cee8-419e-a161-caad18fc0b61\") " Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.961404 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.961449 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config" (OuterVolumeSpecName: "config") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.961466 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.961481 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir" (OuterVolumeSpecName: "datadir") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.961490 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.964450 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token" (OuterVolumeSpecName: "collector-token") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.964497 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token" (OuterVolumeSpecName: "sa-token") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.965255 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht" (OuterVolumeSpecName: "kube-api-access-r6rht") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "kube-api-access-r6rht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.965840 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.966132 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp" (OuterVolumeSpecName: "tmp") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:31:54 crc kubenswrapper[4809]: I0226 14:31:54.967054 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics" (OuterVolumeSpecName: "metrics") pod "741ab1d8-cee8-419e-a161-caad18fc0b61" (UID: "741ab1d8-cee8-419e-a161-caad18fc0b61"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.061783 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6rht\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-kube-api-access-r6rht\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062112 4809 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/741ab1d8-cee8-419e-a161-caad18fc0b61-datadir\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062198 4809 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062259 4809 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062316 4809 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/741ab1d8-cee8-419e-a161-caad18fc0b61-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062412 4809 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062514 4809 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062605 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062736 4809 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/741ab1d8-cee8-419e-a161-caad18fc0b61-tmp\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062826 4809 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/741ab1d8-cee8-419e-a161-caad18fc0b61-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.062904 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/741ab1d8-cee8-419e-a161-caad18fc0b61-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.898307 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-szh9k" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.966400 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-szh9k"] Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.981391 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-bdmnm"] Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.982805 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bdmnm" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.985917 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-szh9k"] Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.987637 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.987897 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.988229 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-2px57" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.988289 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.988437 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.992838 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-bdmnm"] Feb 26 14:31:55 crc kubenswrapper[4809]: I0226 14:31:55.995932 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.077794 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-syslog-receiver\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krs6b\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-kube-api-access-krs6b\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078109 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-metrics\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078149 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-entrypoint\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078213 
4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c5a6be3-a564-4d18-a311-854ab5e8804e-tmp\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078333 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c5a6be3-a564-4d18-a311-854ab5e8804e-datadir\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078370 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config-openshift-service-cacrt\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-sa-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078411 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-trusted-ca\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.078493 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179554 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c5a6be3-a564-4d18-a311-854ab5e8804e-tmp\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c5a6be3-a564-4d18-a311-854ab5e8804e-datadir\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179622 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config-openshift-service-cacrt\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179641 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-sa-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179662 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-trusted-ca\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179687 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7c5a6be3-a564-4d18-a311-854ab5e8804e-datadir\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.179695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-syslog-receiver\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180432 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config-openshift-service-cacrt\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180575 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krs6b\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-kube-api-access-krs6b\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180662 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-metrics\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180674 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-trusted-ca\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc 
kubenswrapper[4809]: I0226 14:31:56.180675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-config\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-entrypoint\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.180752 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.181548 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7c5a6be3-a564-4d18-a311-854ab5e8804e-entrypoint\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.183284 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-metrics\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.184523 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c5a6be3-a564-4d18-a311-854ab5e8804e-tmp\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.185283 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-syslog-receiver\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.185558 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7c5a6be3-a564-4d18-a311-854ab5e8804e-collector-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.196240 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-sa-token\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.200992 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krs6b\" (UniqueName: \"kubernetes.io/projected/7c5a6be3-a564-4d18-a311-854ab5e8804e-kube-api-access-krs6b\") pod \"collector-bdmnm\" (UID: \"7c5a6be3-a564-4d18-a311-854ab5e8804e\") " 
pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.268621 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741ab1d8-cee8-419e-a161-caad18fc0b61" path="/var/lib/kubelet/pods/741ab1d8-cee8-419e-a161-caad18fc0b61/volumes" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.311536 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bdmnm" Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.764337 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-bdmnm"] Feb 26 14:31:56 crc kubenswrapper[4809]: W0226 14:31:56.777449 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c5a6be3_a564_4d18_a311_854ab5e8804e.slice/crio-2982670183f4f845c4b9e274222805aa622bb183c11195eb7c24f17288d8b4a1 WatchSource:0}: Error finding container 2982670183f4f845c4b9e274222805aa622bb183c11195eb7c24f17288d8b4a1: Status 404 returned error can't find the container with id 2982670183f4f845c4b9e274222805aa622bb183c11195eb7c24f17288d8b4a1 Feb 26 14:31:56 crc kubenswrapper[4809]: I0226 14:31:56.911941 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-bdmnm" event={"ID":"7c5a6be3-a564-4d18-a311-854ab5e8804e","Type":"ContainerStarted","Data":"2982670183f4f845c4b9e274222805aa622bb183c11195eb7c24f17288d8b4a1"} Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.131636 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535272-vwdmk"] Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.132965 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.134908 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.135226 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.136562 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.139779 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-vwdmk"] Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.243421 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sxc\" (UniqueName: \"kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc\") pod \"auto-csr-approver-29535272-vwdmk\" (UID: \"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b\") " pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.345068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sxc\" (UniqueName: \"kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc\") pod \"auto-csr-approver-29535272-vwdmk\" (UID: \"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b\") " pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.364525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l2sxc\" (UniqueName: \"kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc\") pod \"auto-csr-approver-29535272-vwdmk\" (UID: \"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b\") " pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:00 crc kubenswrapper[4809]: I0226 14:32:00.468362 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:02 crc kubenswrapper[4809]: I0226 14:32:02.589679 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-vwdmk"] Feb 26 14:32:02 crc kubenswrapper[4809]: I0226 14:32:02.965658 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" event={"ID":"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b","Type":"ContainerStarted","Data":"1d55f29f4ec697329be6347d656b50339ca52f5bb89ea3e67cf86bb8a22fb2bb"} Feb 26 14:32:02 crc kubenswrapper[4809]: I0226 14:32:02.969199 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-bdmnm" event={"ID":"7c5a6be3-a564-4d18-a311-854ab5e8804e","Type":"ContainerStarted","Data":"18a23e53ccf475ab0d37800db639ce03f16906b09482cd10ce3c5ad24c73b50d"} Feb 26 14:32:03 crc kubenswrapper[4809]: I0226 14:32:03.977426 4809 generic.go:334] "Generic (PLEG): container finished" podID="cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" containerID="6740deae034c8610fe54d751f7fe65ec4314d3121a3f1779f50ad3043b18e020" exitCode=0 Feb 26 14:32:03 crc kubenswrapper[4809]: I0226 14:32:03.977533 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" event={"ID":"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b","Type":"ContainerDied","Data":"6740deae034c8610fe54d751f7fe65ec4314d3121a3f1779f50ad3043b18e020"} Feb 26 14:32:03 crc kubenswrapper[4809]: I0226 14:32:03.992923 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-bdmnm" podStartSLOduration=3.56957073 podStartE2EDuration="8.992900751s" podCreationTimestamp="2026-02-26 14:31:55 +0000 UTC" firstStartedPulling="2026-02-26 14:31:56.78170334 +0000 UTC m=+1095.255023883" lastFinishedPulling="2026-02-26 14:32:02.205033381 +0000 UTC m=+1100.678353904" observedRunningTime="2026-02-26 14:32:02.99643998 +0000 UTC m=+1101.469760503" watchObservedRunningTime="2026-02-26 14:32:03.992900751 +0000 UTC m=+1102.466221274" Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.300322 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.426147 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sxc\" (UniqueName: \"kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc\") pod \"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b\" (UID: \"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b\") " Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.430866 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc" (OuterVolumeSpecName: "kube-api-access-l2sxc") pod "cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" (UID: "cd5e5bb3-a6a7-4211-bcf4-612414e2f71b"). InnerVolumeSpecName "kube-api-access-l2sxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.527828 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sxc\" (UniqueName: \"kubernetes.io/projected/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b-kube-api-access-l2sxc\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.992724 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" event={"ID":"cd5e5bb3-a6a7-4211-bcf4-612414e2f71b","Type":"ContainerDied","Data":"1d55f29f4ec697329be6347d656b50339ca52f5bb89ea3e67cf86bb8a22fb2bb"} Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.992801 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d55f29f4ec697329be6347d656b50339ca52f5bb89ea3e67cf86bb8a22fb2bb" Feb 26 14:32:05 crc kubenswrapper[4809]: I0226 14:32:05.992753 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535272-vwdmk" Feb 26 14:32:06 crc kubenswrapper[4809]: I0226 14:32:06.362389 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-hwbtt"] Feb 26 14:32:06 crc kubenswrapper[4809]: I0226 14:32:06.368621 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535266-hwbtt"] Feb 26 14:32:08 crc kubenswrapper[4809]: I0226 14:32:08.265451 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321cb915-1491-48c2-95a9-07d25d34d3cd" path="/var/lib/kubelet/pods/321cb915-1491-48c2-95a9-07d25d34d3cd/volumes" Feb 26 14:32:11 crc kubenswrapper[4809]: I0226 14:32:11.793613 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:32:11 crc kubenswrapper[4809]: I0226 14:32:11.793918 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.527236 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29"] Feb 26 14:32:34 crc kubenswrapper[4809]: E0226 14:32:34.529232 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" containerName="oc" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.529343 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" containerName="oc" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.529724 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" containerName="oc" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.531242 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.533577 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.538824 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29"] Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.626928 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.627061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.627126 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmng5\" (UniqueName: \"kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.727566 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.727911 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.728126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmng5\" (UniqueName: \"kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.728902 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.729092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.753955 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmng5\" (UniqueName: \"kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:34 crc kubenswrapper[4809]: I0226 14:32:34.882348 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:35 crc kubenswrapper[4809]: I0226 14:32:35.378312 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29"] Feb 26 14:32:36 crc kubenswrapper[4809]: I0226 14:32:36.209498 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerID="9e7c60c0c2d34d5f582d5a37238e90e1c59167b83f4a099e57a1b7b6abf801a1" exitCode=0 Feb 26 14:32:36 crc kubenswrapper[4809]: I0226 14:32:36.209577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" event={"ID":"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7","Type":"ContainerDied","Data":"9e7c60c0c2d34d5f582d5a37238e90e1c59167b83f4a099e57a1b7b6abf801a1"} Feb 26 14:32:36 crc kubenswrapper[4809]: I0226 14:32:36.209932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" event={"ID":"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7","Type":"ContainerStarted","Data":"c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a"} Feb 26 14:32:39 crc kubenswrapper[4809]: I0226 14:32:39.234111 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerID="0dee9b1c819d35173d416788950fe18092c752fea3318d9f18bf2be76d58b02e" exitCode=0 Feb 26 14:32:39 crc kubenswrapper[4809]: I0226 14:32:39.234200 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" event={"ID":"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7","Type":"ContainerDied","Data":"0dee9b1c819d35173d416788950fe18092c752fea3318d9f18bf2be76d58b02e"} Feb 26 14:32:40 crc kubenswrapper[4809]: I0226 14:32:40.250910 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerID="506dc61937d247991eb8ce039ab73136b6f987849ce5b27cd3fcf31a1b85b50c" exitCode=0 Feb 26 14:32:40 crc kubenswrapper[4809]: I0226 
14:32:40.251408 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" event={"ID":"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7","Type":"ContainerDied","Data":"506dc61937d247991eb8ce039ab73136b6f987849ce5b27cd3fcf31a1b85b50c"} Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.549649 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.631223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle\") pod \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.631290 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util\") pod \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.631316 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmng5\" (UniqueName: \"kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5\") pod \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\" (UID: \"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7\") " Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.631952 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle" (OuterVolumeSpecName: "bundle") pod "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" (UID: "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.644282 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5" (OuterVolumeSpecName: "kube-api-access-gmng5") pod "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" (UID: "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7"). InnerVolumeSpecName "kube-api-access-gmng5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.648210 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util" (OuterVolumeSpecName: "util") pod "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" (UID: "a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.732928 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.732979 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmng5\" (UniqueName: \"kubernetes.io/projected/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-kube-api-access-gmng5\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.733001 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.794125 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.794184 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.794230 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.794843 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:32:41 crc kubenswrapper[4809]: I0226 14:32:41.794898 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9" gracePeriod=600 Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.282148 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.282504 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29" event={"ID":"a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7","Type":"ContainerDied","Data":"c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a"} Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.282953 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a" Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.288521 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9" exitCode=0 Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.288566 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9"} Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.288592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168"} Feb 26 14:32:42 crc kubenswrapper[4809]: I0226 14:32:42.288610 4809 scope.go:117] "RemoveContainer" containerID="147e6a042dff58a2efad1fa51f075dc260fda7b361544197fd048835da3ba280" Feb 26 14:32:43 crc kubenswrapper[4809]: E0226 14:32:43.680979 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.249817 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4"] Feb 26 14:32:46 crc kubenswrapper[4809]: E0226 14:32:46.250526 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="extract" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.250542 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="extract" Feb 26 14:32:46 crc kubenswrapper[4809]: E0226 14:32:46.250572 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="pull" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.250579 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="pull" Feb 26 14:32:46 crc kubenswrapper[4809]: E0226 14:32:46.250595 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="util" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 
14:32:46.250603 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="util" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.250778 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7" containerName="extract" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.251443 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.259630 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.260073 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.260610 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vwcq8" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.265379 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4"] Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.404218 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxpm8\" (UniqueName: \"kubernetes.io/projected/77f7460a-7462-42ea-8dd6-32340fc3c453-kube-api-access-xxpm8\") pod \"nmstate-operator-75c5dccd6c-r7rl4\" (UID: \"77f7460a-7462-42ea-8dd6-32340fc3c453\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.506117 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxpm8\" (UniqueName: \"kubernetes.io/projected/77f7460a-7462-42ea-8dd6-32340fc3c453-kube-api-access-xxpm8\") pod \"nmstate-operator-75c5dccd6c-r7rl4\" (UID: \"77f7460a-7462-42ea-8dd6-32340fc3c453\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.547204 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxpm8\" (UniqueName: \"kubernetes.io/projected/77f7460a-7462-42ea-8dd6-32340fc3c453-kube-api-access-xxpm8\") pod \"nmstate-operator-75c5dccd6c-r7rl4\" (UID: \"77f7460a-7462-42ea-8dd6-32340fc3c453\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" Feb 26 14:32:46 crc kubenswrapper[4809]: I0226 14:32:46.570723 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" Feb 26 14:32:47 crc kubenswrapper[4809]: I0226 14:32:47.012804 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4"] Feb 26 14:32:47 crc kubenswrapper[4809]: I0226 14:32:47.327291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" event={"ID":"77f7460a-7462-42ea-8dd6-32340fc3c453","Type":"ContainerStarted","Data":"c597f00787610151896605fecda2714d6fa22f8dbc6d0913ad26abf5fa8e98b9"} Feb 26 14:32:47 crc kubenswrapper[4809]: E0226 14:32:47.498338 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:32:48 crc kubenswrapper[4809]: E0226 14:32:48.104667 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:32:48 crc kubenswrapper[4809]: E0226 14:32:48.104822 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:32:49 crc kubenswrapper[4809]: I0226 14:32:49.613701 4809 scope.go:117] "RemoveContainer" containerID="24c9490053db79a15b9c8554014251d097965d77984cca65d207015db15eba90" Feb 26 14:32:51 crc kubenswrapper[4809]: I0226 14:32:51.372135 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" event={"ID":"77f7460a-7462-42ea-8dd6-32340fc3c453","Type":"ContainerStarted","Data":"1f66c081a79ca41576782c5974de4205606a3fd7e7a4441ac4e2b46c623c21c0"} Feb 26 14:32:51 crc kubenswrapper[4809]: I0226 14:32:51.408828 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-r7rl4" podStartSLOduration=1.793046211 podStartE2EDuration="5.408806764s" podCreationTimestamp="2026-02-26 14:32:46 +0000 UTC" firstStartedPulling="2026-02-26 14:32:47.014563347 +0000 UTC m=+1145.487883870" lastFinishedPulling="2026-02-26 14:32:50.63032389 +0000 UTC m=+1149.103644423" observedRunningTime="2026-02-26 14:32:51.397183006 +0000 UTC m=+1149.870503549" watchObservedRunningTime="2026-02-26 14:32:51.408806764 +0000 UTC m=+1149.882127307" Feb 26 14:32:53 crc kubenswrapper[4809]: E0226 14:32:53.876919 4809 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.600121 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8gnsv"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.602044 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.605231 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pvrwj" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.618148 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8gnsv"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.626403 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-5m958"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.627772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.635391 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.642208 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jz4x7"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.643422 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.665570 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-5m958"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760108 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-nmstate-lock\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760190 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760228 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ckl\" (UniqueName: \"kubernetes.io/projected/665daf18-37e2-42cb-9d28-671eed0de9ae-kube-api-access-25ckl\") pod \"nmstate-metrics-69594cc75-8gnsv\" (UID: \"665daf18-37e2-42cb-9d28-671eed0de9ae\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760256 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55hk\" (UniqueName: \"kubernetes.io/projected/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-kube-api-access-r55hk\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760357 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-ovs-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-dbus-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.760414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r59z5\" (UniqueName: \"kubernetes.io/projected/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-kube-api-access-r59z5\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.774315 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.775351 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.777521 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.777659 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.777756 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jbt7p" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.783614 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st"] Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-nmstate-lock\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861757 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861794 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b68ecb6-4527-43b3-9383-605a44c377a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-nmstate-lock\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861817 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ckl\" (UniqueName: \"kubernetes.io/projected/665daf18-37e2-42cb-9d28-671eed0de9ae-kube-api-access-25ckl\") pod \"nmstate-metrics-69594cc75-8gnsv\" (UID: \"665daf18-37e2-42cb-9d28-671eed0de9ae\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861873 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55hk\" (UniqueName: \"kubernetes.io/projected/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-kube-api-access-r55hk\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861941 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjrj\" (UniqueName: \"kubernetes.io/projected/0b68ecb6-4527-43b3-9383-605a44c377a4-kube-api-access-smjrj\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" 
(UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861966 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-ovs-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.861993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-dbus-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.862038 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r59z5\" (UniqueName: \"kubernetes.io/projected/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-kube-api-access-r59z5\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.862075 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b68ecb6-4527-43b3-9383-605a44c377a4-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.862126 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-ovs-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.862413 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-dbus-socket\") pod \"nmstate-handler-jz4x7\" (UID: \"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.868483 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.881654 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ckl\" (UniqueName: \"kubernetes.io/projected/665daf18-37e2-42cb-9d28-671eed0de9ae-kube-api-access-25ckl\") pod \"nmstate-metrics-69594cc75-8gnsv\" (UID: \"665daf18-37e2-42cb-9d28-671eed0de9ae\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.884858 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r59z5\" (UniqueName: \"kubernetes.io/projected/4ce72366-e1aa-4a1a-ae00-1ff3e592c4df-kube-api-access-r59z5\") pod \"nmstate-handler-jz4x7\" (UID: 
\"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df\") " pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.896981 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55hk\" (UniqueName: \"kubernetes.io/projected/ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82-kube-api-access-r55hk\") pod \"nmstate-webhook-786f45cff4-5m958\" (UID: \"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.923690 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.945958 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.964038 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b68ecb6-4527-43b3-9383-605a44c377a4-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.964186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b68ecb6-4527-43b3-9383-605a44c377a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.964259 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smjrj\" (UniqueName: \"kubernetes.io/projected/0b68ecb6-4527-43b3-9383-605a44c377a4-kube-api-access-smjrj\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.965072 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0b68ecb6-4527-43b3-9383-605a44c377a4-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.968194 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.969645 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b68ecb6-4527-43b3-9383-605a44c377a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:55 crc kubenswrapper[4809]: I0226 14:32:55.992522 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smjrj\" (UniqueName: \"kubernetes.io/projected/0b68ecb6-4527-43b3-9383-605a44c377a4-kube-api-access-smjrj\") pod \"nmstate-console-plugin-5dcbbd79cf-sz7st\" (UID: \"0b68ecb6-4527-43b3-9383-605a44c377a4\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.015365 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.016356 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: W0226 14:32:56.045604 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce72366_e1aa_4a1a_ae00_1ff3e592c4df.slice/crio-ccef28d2bbd3d2c5735eda5bcc6214f20bd72278f68ea7902b0fd60ef336ecf4 WatchSource:0}: Error finding container ccef28d2bbd3d2c5735eda5bcc6214f20bd72278f68ea7902b0fd60ef336ecf4: Status 404 returned error can't find the container with id ccef28d2bbd3d2c5735eda5bcc6214f20bd72278f68ea7902b0fd60ef336ecf4 Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.053147 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.101677 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.166993 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168382 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168576 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.168696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wgd\" (UniqueName: \"kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.273967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274074 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4wgd\" (UniqueName: \"kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274371 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.274418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.278298 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.283138 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.290527 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-5m958"] Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.292848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc 
kubenswrapper[4809]: I0226 14:32:56.293062 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.296420 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.310243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wgd\" (UniqueName: \"kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.312715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert\") pod \"console-75d699bb66-fpqsn\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.341772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.413922 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jz4x7" event={"ID":"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df","Type":"ContainerStarted","Data":"ccef28d2bbd3d2c5735eda5bcc6214f20bd72278f68ea7902b0fd60ef336ecf4"} Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.418247 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" event={"ID":"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82","Type":"ContainerStarted","Data":"a52d2123e88185a983e290aa55f1a4b9264e59b2d1c2d0ef2ac7ed5d0378c11d"} Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.537879 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-8gnsv"] Feb 26 14:32:56 crc kubenswrapper[4809]: W0226 14:32:56.564221 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665daf18_37e2_42cb_9d28_671eed0de9ae.slice/crio-0f96439e5f2af9b4a83c43c629f0f0992985580e5b096fa006ed60d7ee0d9a6b WatchSource:0}: Error finding container 0f96439e5f2af9b4a83c43c629f0f0992985580e5b096fa006ed60d7ee0d9a6b: Status 404 returned error can't find the container with id 0f96439e5f2af9b4a83c43c629f0f0992985580e5b096fa006ed60d7ee0d9a6b Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.611023 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:32:56 crc kubenswrapper[4809]: W0226 14:32:56.623374 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cc121f0_eb4d_4178_bb80_11c1e85e812d.slice/crio-8f1ea5f6fc066079f0a950f0fa748758ee40e72e067f898a4dbc2199d24e5c50 
WatchSource:0}: Error finding container 8f1ea5f6fc066079f0a950f0fa748758ee40e72e067f898a4dbc2199d24e5c50: Status 404 returned error can't find the container with id 8f1ea5f6fc066079f0a950f0fa748758ee40e72e067f898a4dbc2199d24e5c50 Feb 26 14:32:56 crc kubenswrapper[4809]: I0226 14:32:56.645919 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st"] Feb 26 14:32:56 crc kubenswrapper[4809]: W0226 14:32:56.656633 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b68ecb6_4527_43b3_9383_605a44c377a4.slice/crio-e64641fbf04f711ff1a62930ecf4e8654b6b980bc8357a0e0d41ae2a99d86a9d WatchSource:0}: Error finding container e64641fbf04f711ff1a62930ecf4e8654b6b980bc8357a0e0d41ae2a99d86a9d: Status 404 returned error can't find the container with id e64641fbf04f711ff1a62930ecf4e8654b6b980bc8357a0e0d41ae2a99d86a9d Feb 26 14:32:57 crc kubenswrapper[4809]: I0226 14:32:57.427777 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" event={"ID":"0b68ecb6-4527-43b3-9383-605a44c377a4","Type":"ContainerStarted","Data":"e64641fbf04f711ff1a62930ecf4e8654b6b980bc8357a0e0d41ae2a99d86a9d"} Feb 26 14:32:57 crc kubenswrapper[4809]: I0226 14:32:57.429003 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" event={"ID":"665daf18-37e2-42cb-9d28-671eed0de9ae","Type":"ContainerStarted","Data":"0f96439e5f2af9b4a83c43c629f0f0992985580e5b096fa006ed60d7ee0d9a6b"} Feb 26 14:32:57 crc kubenswrapper[4809]: I0226 14:32:57.430698 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d699bb66-fpqsn" event={"ID":"3cc121f0-eb4d-4178-bb80-11c1e85e812d","Type":"ContainerStarted","Data":"8b9b8f594287ceec31be0a0d4f5420722377d31d1bdeeb67c217407cb2dd7888"} Feb 26 14:32:57 crc kubenswrapper[4809]: I0226 14:32:57.430836 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d699bb66-fpqsn" event={"ID":"3cc121f0-eb4d-4178-bb80-11c1e85e812d","Type":"ContainerStarted","Data":"8f1ea5f6fc066079f0a950f0fa748758ee40e72e067f898a4dbc2199d24e5c50"} Feb 26 14:32:57 crc kubenswrapper[4809]: I0226 14:32:57.458347 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75d699bb66-fpqsn" podStartSLOduration=2.458323823 podStartE2EDuration="2.458323823s" podCreationTimestamp="2026-02-26 14:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:32:57.449107972 +0000 UTC m=+1155.922428495" watchObservedRunningTime="2026-02-26 14:32:57.458323823 +0000 UTC m=+1155.931644346" Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.458767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jz4x7" event={"ID":"4ce72366-e1aa-4a1a-ae00-1ff3e592c4df","Type":"ContainerStarted","Data":"ce4304fcb24683da587b1ad9fad211f5dc7a85d4fe77b13673acc55f01b9dda0"} Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.459464 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.461627 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" 
event={"ID":"0b68ecb6-4527-43b3-9383-605a44c377a4","Type":"ContainerStarted","Data":"951e17ede8762b8d20fffdd72aa49af722b3caa754325f5d6c0bb01bb973edb6"} Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.463395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" event={"ID":"ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82","Type":"ContainerStarted","Data":"7b3d519a5ff60d0166534108002ab4cf1be2b7b8374575b5d46edd4300a12ee3"} Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.463528 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.465046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" event={"ID":"665daf18-37e2-42cb-9d28-671eed0de9ae","Type":"ContainerStarted","Data":"b6196c3655a19ca7f320762dbf05ccbfb08ebefa55dbf963d3cd8f9b9960a188"} Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.481317 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jz4x7" podStartSLOduration=1.925434525 podStartE2EDuration="6.481300647s" podCreationTimestamp="2026-02-26 14:32:55 +0000 UTC" firstStartedPulling="2026-02-26 14:32:56.052705344 +0000 UTC m=+1154.526025867" lastFinishedPulling="2026-02-26 14:33:00.608571466 +0000 UTC m=+1159.081891989" observedRunningTime="2026-02-26 14:33:01.473117075 +0000 UTC m=+1159.946437618" watchObservedRunningTime="2026-02-26 14:33:01.481300647 +0000 UTC m=+1159.954621170" Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.503081 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" podStartSLOduration=2.281468597 podStartE2EDuration="6.503062453s" podCreationTimestamp="2026-02-26 14:32:55 +0000 UTC" firstStartedPulling="2026-02-26 14:32:56.304204819 +0000 UTC m=+1154.777525342" lastFinishedPulling="2026-02-26 14:33:00.525798675 +0000 UTC m=+1158.999119198" observedRunningTime="2026-02-26 14:33:01.50296767 +0000 UTC m=+1159.976288193" watchObservedRunningTime="2026-02-26 14:33:01.503062453 +0000 UTC m=+1159.976382976" Feb 26 14:33:01 crc kubenswrapper[4809]: I0226 14:33:01.504846 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sz7st" podStartSLOduration=2.663958409 podStartE2EDuration="6.504834903s" podCreationTimestamp="2026-02-26 14:32:55 +0000 UTC" firstStartedPulling="2026-02-26 14:32:56.685555289 +0000 UTC m=+1155.158875812" lastFinishedPulling="2026-02-26 14:33:00.526431783 +0000 UTC m=+1158.999752306" observedRunningTime="2026-02-26 14:33:01.488646585 +0000 UTC m=+1159.961967108" watchObservedRunningTime="2026-02-26 14:33:01.504834903 +0000 UTC m=+1159.978155436" Feb 26 14:33:02 crc kubenswrapper[4809]: E0226 14:33:02.558240 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:03 crc kubenswrapper[4809]: E0226 14:33:03.916404 4809 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:04 crc kubenswrapper[4809]: I0226 14:33:04.489493 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" event={"ID":"665daf18-37e2-42cb-9d28-671eed0de9ae","Type":"ContainerStarted","Data":"db592cfc8b576748319672fb0759f334141a032e018b94f9c1b7536fe90ac85e"} Feb 26 14:33:04 crc kubenswrapper[4809]: I0226 14:33:04.510184 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-8gnsv" podStartSLOduration=2.058545791 podStartE2EDuration="9.510165606s" podCreationTimestamp="2026-02-26 14:32:55 +0000 UTC" firstStartedPulling="2026-02-26 14:32:56.566663805 +0000 UTC m=+1155.039984328" lastFinishedPulling="2026-02-26 14:33:04.01828362 +0000 UTC m=+1162.491604143" observedRunningTime="2026-02-26 14:33:04.506570945 +0000 UTC m=+1162.979891528" watchObservedRunningTime="2026-02-26 14:33:04.510165606 +0000 UTC m=+1162.983486129" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.004101 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.343033 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.343399 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.348720 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.508060 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:33:06 crc kubenswrapper[4809]: I0226 14:33:06.573974 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:33:14 crc kubenswrapper[4809]: E0226 14:33:14.116886 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:15 crc kubenswrapper[4809]: I0226 14:33:15.953403 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 14:33:17 crc kubenswrapper[4809]: E0226 14:33:17.504506 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:24 crc kubenswrapper[4809]: E0226 14:33:24.284731 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:31 crc kubenswrapper[4809]: I0226 14:33:31.627233 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76fc989f8f-jg8s9" podUID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" containerName="console" containerID="cri-o://8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483" gracePeriod=15 Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.071853 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76fc989f8f-jg8s9_045c9e58-274e-4032-bbe3-4c63cdc9be1a/console/0.log" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.072214 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.109903 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110071 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110112 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110164 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9sh9\" (UniqueName: \"kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110205 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") 
" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110240 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.110321 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config\") pod \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\" (UID: \"045c9e58-274e-4032-bbe3-4c63cdc9be1a\") " Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.111177 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.111244 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca" (OuterVolumeSpecName: "service-ca") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.111239 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.111289 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config" (OuterVolumeSpecName: "console-config") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.115962 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.136323 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.136630 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9" (OuterVolumeSpecName: "kube-api-access-l9sh9") pod "045c9e58-274e-4032-bbe3-4c63cdc9be1a" (UID: "045c9e58-274e-4032-bbe3-4c63cdc9be1a"). InnerVolumeSpecName "kube-api-access-l9sh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.212618 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9sh9\" (UniqueName: \"kubernetes.io/projected/045c9e58-274e-4032-bbe3-4c63cdc9be1a-kube-api-access-l9sh9\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.212876 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.213040 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.213139 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.213236 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.213342 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/045c9e58-274e-4032-bbe3-4c63cdc9be1a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.213440 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/045c9e58-274e-4032-bbe3-4c63cdc9be1a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:32 crc kubenswrapper[4809]: E0226 14:33:32.662478 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715309 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76fc989f8f-jg8s9_045c9e58-274e-4032-bbe3-4c63cdc9be1a/console/0.log" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715371 4809 generic.go:334] "Generic (PLEG): container finished" podID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" containerID="8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483" exitCode=2 Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715406 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-jg8s9" event={"ID":"045c9e58-274e-4032-bbe3-4c63cdc9be1a","Type":"ContainerDied","Data":"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483"} Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-jg8s9" event={"ID":"045c9e58-274e-4032-bbe3-4c63cdc9be1a","Type":"ContainerDied","Data":"234e1d9e2d995bfc3a06cfc7cb913b32320c2f4aaf003db759da7a2c6152b5a0"} Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715448 4809 scope.go:117] "RemoveContainer" containerID="8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.715453 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76fc989f8f-jg8s9" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.738179 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.741663 4809 scope.go:117] "RemoveContainer" containerID="8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483" Feb 26 14:33:32 crc kubenswrapper[4809]: E0226 14:33:32.744453 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483\": container with ID starting with 8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483 not found: ID does not exist" containerID="8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.744498 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483"} err="failed to get container status \"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483\": rpc error: code = NotFound desc = could not find container \"8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483\": container with ID starting with 8235ed4741e2527d5cca260ee9502b4eb3d9c19e35252fe65545a168e9ab9483 not found: ID does not exist" Feb 26 14:33:32 crc kubenswrapper[4809]: I0226 14:33:32.744692 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76fc989f8f-jg8s9"] Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.060489 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7"] Feb 26 14:33:33 crc kubenswrapper[4809]: E0226 14:33:33.060763 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" containerName="console" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.060778 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" containerName="console" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.060919 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" containerName="console" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.062058 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.063785 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.072348 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7"] Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.130389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.130634 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwnp\" (UniqueName: \"kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.130696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.232274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.232379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwnp\" (UniqueName: \"kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.232411 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.232862 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.232878 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.253172 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwnp\" (UniqueName: \"kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.380333 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.635271 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7"] Feb 26 14:33:33 crc kubenswrapper[4809]: I0226 14:33:33.723597 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" event={"ID":"a9e32e0e-6f30-4d37-b75d-cff50247395f","Type":"ContainerStarted","Data":"91b6c73b715032942bb142aa7bf4f650e4bcae5b6eb26e6cce2a1308092c64d1"} Feb 26 14:33:34 crc kubenswrapper[4809]: I0226 14:33:34.267127 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="045c9e58-274e-4032-bbe3-4c63cdc9be1a" path="/var/lib/kubelet/pods/045c9e58-274e-4032-bbe3-4c63cdc9be1a/volumes" Feb 26 14:33:34 crc kubenswrapper[4809]: E0226 14:33:34.316129 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4db5e64_e72e_41b4_ad05_d1ed0ffdcdf7.slice/crio-c27dbe8dfd2f7d7c8ac32fe1bf26170645779780a283f8c7a6c50b883258746a\": RecentStats: unable to find data in memory cache]" Feb 26 14:33:34 crc kubenswrapper[4809]: I0226 14:33:34.731851 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerID="edbb245cfc022d1649347bcc049418c6831bc052be1b3541ed37798911bbec9a" exitCode=0 Feb 26 14:33:34 crc kubenswrapper[4809]: I0226 14:33:34.733252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" event={"ID":"a9e32e0e-6f30-4d37-b75d-cff50247395f","Type":"ContainerDied","Data":"edbb245cfc022d1649347bcc049418c6831bc052be1b3541ed37798911bbec9a"} Feb 26 14:33:34 crc kubenswrapper[4809]: I0226 14:33:34.734331 4809 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:33:36 crc kubenswrapper[4809]: I0226 14:33:36.746648 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerID="0e97631a2dc86a1bb7edc9883e4c2a175eff9f304432a058f74d2eac202ec0e0" exitCode=0 Feb 26 14:33:36 crc kubenswrapper[4809]: I0226 14:33:36.746727 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" event={"ID":"a9e32e0e-6f30-4d37-b75d-cff50247395f","Type":"ContainerDied","Data":"0e97631a2dc86a1bb7edc9883e4c2a175eff9f304432a058f74d2eac202ec0e0"} Feb 26 14:33:37 crc kubenswrapper[4809]: I0226 14:33:37.756386 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerID="576dda9bb6f0d87c168054baab370a7e236c4c287ce6551094202e84fa7a3487" exitCode=0 Feb 26 14:33:37 crc kubenswrapper[4809]: I0226 14:33:37.756729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" event={"ID":"a9e32e0e-6f30-4d37-b75d-cff50247395f","Type":"ContainerDied","Data":"576dda9bb6f0d87c168054baab370a7e236c4c287ce6551094202e84fa7a3487"} Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.035175 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.144650 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle\") pod \"a9e32e0e-6f30-4d37-b75d-cff50247395f\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.144841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwnp\" (UniqueName: \"kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp\") pod \"a9e32e0e-6f30-4d37-b75d-cff50247395f\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.144927 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util\") pod \"a9e32e0e-6f30-4d37-b75d-cff50247395f\" (UID: \"a9e32e0e-6f30-4d37-b75d-cff50247395f\") " Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.145925 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle" (OuterVolumeSpecName: "bundle") pod "a9e32e0e-6f30-4d37-b75d-cff50247395f" (UID: "a9e32e0e-6f30-4d37-b75d-cff50247395f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.151245 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp" (OuterVolumeSpecName: "kube-api-access-knwnp") pod "a9e32e0e-6f30-4d37-b75d-cff50247395f" (UID: "a9e32e0e-6f30-4d37-b75d-cff50247395f"). InnerVolumeSpecName "kube-api-access-knwnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.162861 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util" (OuterVolumeSpecName: "util") pod "a9e32e0e-6f30-4d37-b75d-cff50247395f" (UID: "a9e32e0e-6f30-4d37-b75d-cff50247395f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.246866 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwnp\" (UniqueName: \"kubernetes.io/projected/a9e32e0e-6f30-4d37-b75d-cff50247395f-kube-api-access-knwnp\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.246910 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.246927 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9e32e0e-6f30-4d37-b75d-cff50247395f-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.772629 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" event={"ID":"a9e32e0e-6f30-4d37-b75d-cff50247395f","Type":"ContainerDied","Data":"91b6c73b715032942bb142aa7bf4f650e4bcae5b6eb26e6cce2a1308092c64d1"} Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.772670 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b6c73b715032942bb142aa7bf4f650e4bcae5b6eb26e6cce2a1308092c64d1" Feb 26 14:33:39 crc kubenswrapper[4809]: I0226 14:33:39.772708 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.978079 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm"] Feb 26 14:33:49 crc kubenswrapper[4809]: E0226 14:33:49.978813 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="pull" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.978826 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="pull" Feb 26 14:33:49 crc kubenswrapper[4809]: E0226 14:33:49.978851 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="util" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.978857 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="util" Feb 26 14:33:49 crc kubenswrapper[4809]: E0226 14:33:49.978868 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="extract" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.978874 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="extract" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.978993 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e32e0e-6f30-4d37-b75d-cff50247395f" containerName="extract" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.979527 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.982459 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.982466 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4qj57" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.982589 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.982624 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.983446 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 14:33:49 crc kubenswrapper[4809]: I0226 14:33:49.992380 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm"] Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.053194 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxg2\" (UniqueName: \"kubernetes.io/projected/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-kube-api-access-2zxg2\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.053334 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-apiservice-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.053374 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-webhook-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.154855 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-apiservice-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.154904 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-webhook-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.155024 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxg2\" (UniqueName: \"kubernetes.io/projected/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-kube-api-access-2zxg2\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.161624 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-webhook-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.161797 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-apiservice-cert\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.182666 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxg2\" (UniqueName: \"kubernetes.io/projected/19bdfc76-4c2f-4ef8-890e-84d3a6f5b895-kube-api-access-2zxg2\") pod \"metallb-operator-controller-manager-78c95b4464-fclfm\" (UID: \"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895\") " pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.253823 4809 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb"] Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.254808 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.257405 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-knm56" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.257618 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.257790 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.271602 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb"] Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.296521 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.359418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-webhook-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.359454 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6l8s\" (UniqueName: \"kubernetes.io/projected/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-kube-api-access-j6l8s\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.359548 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-apiservice-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.461159 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-apiservice-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.461601 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-webhook-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.461622 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6l8s\" (UniqueName: \"kubernetes.io/projected/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-kube-api-access-j6l8s\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.471060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-webhook-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.477898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-apiservice-cert\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.480531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6l8s\" (UniqueName: \"kubernetes.io/projected/ba3c9bcd-2859-4815-ba37-d6337eb78ec1-kube-api-access-j6l8s\") pod \"metallb-operator-webhook-server-6fc554dcbc-rqcfb\" (UID: \"ba3c9bcd-2859-4815-ba37-d6337eb78ec1\") " pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.574790 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:50 crc kubenswrapper[4809]: I0226 14:33:50.924319 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm"] Feb 26 14:33:51 crc kubenswrapper[4809]: I0226 14:33:51.114210 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb"] Feb 26 14:33:51 crc kubenswrapper[4809]: I0226 14:33:51.887553 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" event={"ID":"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895","Type":"ContainerStarted","Data":"a2ab9ac1b6552258aebdaff5be0989e78f16401606063eaaa3c9950f88e7f5e6"} Feb 26 14:33:51 crc kubenswrapper[4809]: I0226 14:33:51.889402 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" event={"ID":"ba3c9bcd-2859-4815-ba37-d6337eb78ec1","Type":"ContainerStarted","Data":"eab7389d0f6e37efd5fa9d7517a15db2046c2c83ec91979fe8295f2dc9dfa6df"} Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.940913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" event={"ID":"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895","Type":"ContainerStarted","Data":"6b6caff2b63580a28efefec471a1d8270f54d2cc88be2785f30ae408caa78df5"} Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.941628 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.942568 
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" event={"ID":"ba3c9bcd-2859-4815-ba37-d6337eb78ec1","Type":"ContainerStarted","Data":"ebe69d1c0838196e90862c2730a0bfe432eacb5086a2b0fe333b9be9c880d7c5"} Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.942869 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.960348 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" podStartSLOduration=2.948909503 podStartE2EDuration="8.9603329s" podCreationTimestamp="2026-02-26 14:33:49 +0000 UTC" firstStartedPulling="2026-02-26 14:33:50.902220133 +0000 UTC m=+1209.375540656" lastFinishedPulling="2026-02-26 14:33:56.91364353 +0000 UTC m=+1215.386964053" observedRunningTime="2026-02-26 14:33:57.958235531 +0000 UTC m=+1216.431556054" watchObservedRunningTime="2026-02-26 14:33:57.9603329 +0000 UTC m=+1216.433653423" Feb 26 14:33:57 crc kubenswrapper[4809]: I0226 14:33:57.984958 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podStartSLOduration=2.18049155 podStartE2EDuration="7.984943417s" podCreationTimestamp="2026-02-26 14:33:50 +0000 UTC" firstStartedPulling="2026-02-26 14:33:51.117490397 +0000 UTC m=+1209.590810920" lastFinishedPulling="2026-02-26 14:33:56.921942264 +0000 UTC m=+1215.395262787" observedRunningTime="2026-02-26 14:33:57.981918441 +0000 UTC m=+1216.455238964" watchObservedRunningTime="2026-02-26 14:33:57.984943417 +0000 UTC m=+1216.458263940" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.160155 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535274-k2m6c"] Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.161514 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.163948 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.164174 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.164220 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.175949 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-k2m6c"] Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.232214 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqqh\" (UniqueName: \"kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh\") pod \"auto-csr-approver-29535274-k2m6c\" (UID: \"8e044d4b-4f62-464c-b887-005d79ce073c\") " pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.334585 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqqh\" (UniqueName: \"kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh\") pod \"auto-csr-approver-29535274-k2m6c\" (UID: \"8e044d4b-4f62-464c-b887-005d79ce073c\") " pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.366756 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqqh\" (UniqueName: \"kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh\") pod \"auto-csr-approver-29535274-k2m6c\" (UID: \"8e044d4b-4f62-464c-b887-005d79ce073c\") " pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.491143 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.740531 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-k2m6c"] Feb 26 14:34:00 crc kubenswrapper[4809]: I0226 14:34:00.990540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" event={"ID":"8e044d4b-4f62-464c-b887-005d79ce073c","Type":"ContainerStarted","Data":"eaa3901c114b3576298e36117ddb6b7994a7f44d110b9454ab47560c24c778bb"} Feb 26 14:34:03 crc kubenswrapper[4809]: I0226 14:34:03.006749 4809 generic.go:334] "Generic (PLEG): container finished" podID="8e044d4b-4f62-464c-b887-005d79ce073c" containerID="8480b241e94427a84f2387e3a1498b9c2bd4e481e2660bcc4f13d34a81953c00" exitCode=0 Feb 26 14:34:03 crc kubenswrapper[4809]: I0226 14:34:03.007377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" event={"ID":"8e044d4b-4f62-464c-b887-005d79ce073c","Type":"ContainerDied","Data":"8480b241e94427a84f2387e3a1498b9c2bd4e481e2660bcc4f13d34a81953c00"} Feb 26 14:34:04 crc kubenswrapper[4809]: I0226 14:34:04.336120 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:04 crc kubenswrapper[4809]: I0226 14:34:04.434072 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbqqh\" (UniqueName: \"kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh\") pod \"8e044d4b-4f62-464c-b887-005d79ce073c\" (UID: \"8e044d4b-4f62-464c-b887-005d79ce073c\") " Feb 26 14:34:04 crc kubenswrapper[4809]: I0226 14:34:04.440875 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh" (OuterVolumeSpecName: "kube-api-access-dbqqh") pod "8e044d4b-4f62-464c-b887-005d79ce073c" (UID: "8e044d4b-4f62-464c-b887-005d79ce073c"). InnerVolumeSpecName "kube-api-access-dbqqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:34:04 crc kubenswrapper[4809]: I0226 14:34:04.536088 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbqqh\" (UniqueName: \"kubernetes.io/projected/8e044d4b-4f62-464c-b887-005d79ce073c-kube-api-access-dbqqh\") on node \"crc\" DevicePath \"\"" Feb 26 14:34:05 crc kubenswrapper[4809]: I0226 14:34:05.023681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" event={"ID":"8e044d4b-4f62-464c-b887-005d79ce073c","Type":"ContainerDied","Data":"eaa3901c114b3576298e36117ddb6b7994a7f44d110b9454ab47560c24c778bb"} Feb 26 14:34:05 crc kubenswrapper[4809]: I0226 14:34:05.023980 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa3901c114b3576298e36117ddb6b7994a7f44d110b9454ab47560c24c778bb" Feb 26 14:34:05 crc kubenswrapper[4809]: I0226 14:34:05.023754 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535274-k2m6c" Feb 26 14:34:05 crc kubenswrapper[4809]: I0226 14:34:05.398721 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-48gcc"] Feb 26 14:34:05 crc kubenswrapper[4809]: I0226 14:34:05.405688 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535268-48gcc"] Feb 26 14:34:06 crc kubenswrapper[4809]: I0226 14:34:06.267194 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821c1a70-89c5-433a-ae42-800d966fdbe2" path="/var/lib/kubelet/pods/821c1a70-89c5-433a-ae42-800d966fdbe2/volumes" Feb 26 14:34:10 crc kubenswrapper[4809]: I0226 14:34:10.579818 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 14:34:30 crc kubenswrapper[4809]: I0226 14:34:30.305670 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.048343 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6"] Feb 26 14:34:31 crc kubenswrapper[4809]: E0226 14:34:31.048731 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e044d4b-4f62-464c-b887-005d79ce073c" containerName="oc" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.048749 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e044d4b-4f62-464c-b887-005d79ce073c" containerName="oc" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.048911 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e044d4b-4f62-464c-b887-005d79ce073c" containerName="oc" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.049649 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.051948 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.052051 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gljlq" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.058520 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xpd62"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.062248 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.064626 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.065119 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.101828 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.184226 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kwnwr"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.185576 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.187886 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.188058 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.188457 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-chhhz" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.188493 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.208373 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-v7drb"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.209963 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.212381 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.225424 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-v7drb"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0457c9d-5a38-464b-92ca-da334aae1915-frr-startup\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253537 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd7b\" (UniqueName: \"kubernetes.io/projected/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-kube-api-access-vnd7b\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-cert\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253618 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/629e9f19-72e1-497b-a156-51a0ed359d4c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-metrics-certs\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253789 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-reloader\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253883 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkp6\" (UniqueName: \"kubernetes.io/projected/a0457c9d-5a38-464b-92ca-da334aae1915-kube-api-access-zzkp6\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pzr6\" (UniqueName: \"kubernetes.io/projected/629e9f19-72e1-497b-a156-51a0ed359d4c-kube-api-access-5pzr6\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.253966 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metallb-excludel2\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254138 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-metrics\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254171 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6bg\" (UniqueName: \"kubernetes.io/projected/cd58f297-8233-45a5-8bd4-04621d1e1750-kube-api-access-lq6bg\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-sockets\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254207 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metrics-certs\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254250 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-conf\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.254284 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0457c9d-5a38-464b-92ca-da334aae1915-metrics-certs\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-metrics\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355843 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6bg\" (UniqueName: \"kubernetes.io/projected/cd58f297-8233-45a5-8bd4-04621d1e1750-kube-api-access-lq6bg\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355866 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-sockets\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355888 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metrics-certs\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-conf\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.355977 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0457c9d-5a38-464b-92ca-da334aae1915-metrics-certs\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356010 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0457c9d-5a38-464b-92ca-da334aae1915-frr-startup\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356118 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd7b\" (UniqueName: \"kubernetes.io/projected/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-kube-api-access-vnd7b\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " 
pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356135 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-cert\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/629e9f19-72e1-497b-a156-51a0ed359d4c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356292 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-metrics-certs\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356326 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-reloader\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkp6\" (UniqueName: \"kubernetes.io/projected/a0457c9d-5a38-464b-92ca-da334aae1915-kube-api-access-zzkp6\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pzr6\" (UniqueName: \"kubernetes.io/projected/629e9f19-72e1-497b-a156-51a0ed359d4c-kube-api-access-5pzr6\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356476 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metallb-excludel2\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.356855 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-conf\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.357332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a0457c9d-5a38-464b-92ca-da334aae1915-frr-startup\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.357495 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-frr-sockets\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.357593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-reloader\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: E0226 14:34:31.357782 4809 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.358526 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metallb-excludel2\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: E0226 14:34:31.359091 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist podName:e778875f-43d9-4ab5-9e0c-e561a3d4bd2f nodeName:}" failed. No retries permitted until 2026-02-26 14:34:31.858995229 +0000 UTC m=+1250.332315752 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist") pod "speaker-kwnwr" (UID: "e778875f-43d9-4ab5-9e0c-e561a3d4bd2f") : secret "metallb-memberlist" not found Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.359571 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a0457c9d-5a38-464b-92ca-da334aae1915-metrics\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.362878 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-metrics-certs\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.363442 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd58f297-8233-45a5-8bd4-04621d1e1750-cert\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.364597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-metrics-certs\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.372744 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0457c9d-5a38-464b-92ca-da334aae1915-metrics-certs\") pod \"frr-k8s-xpd62\" (UID: \"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.375487 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/629e9f19-72e1-497b-a156-51a0ed359d4c-cert\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.375974 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6bg\" (UniqueName: \"kubernetes.io/projected/cd58f297-8233-45a5-8bd4-04621d1e1750-kube-api-access-lq6bg\") pod \"controller-86ddb6bd46-v7drb\" (UID: \"cd58f297-8233-45a5-8bd4-04621d1e1750\") " pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.376139 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pzr6\" (UniqueName: \"kubernetes.io/projected/629e9f19-72e1-497b-a156-51a0ed359d4c-kube-api-access-5pzr6\") pod \"frr-k8s-webhook-server-7f989f654f-pb8s6\" (UID: \"629e9f19-72e1-497b-a156-51a0ed359d4c\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.378570 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkp6\" (UniqueName: \"kubernetes.io/projected/a0457c9d-5a38-464b-92ca-da334aae1915-kube-api-access-zzkp6\") pod \"frr-k8s-xpd62\" (UID: 
\"a0457c9d-5a38-464b-92ca-da334aae1915\") " pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.381147 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd7b\" (UniqueName: \"kubernetes.io/projected/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-kube-api-access-vnd7b\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.382868 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.405970 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.533387 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:31 crc kubenswrapper[4809]: W0226 14:34:31.810922 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod629e9f19_72e1_497b_a156_51a0ed359d4c.slice/crio-9656ff7b04b13db0b328dfdf7a9c7f40c3b09adb043617dee45756029f8e66bf WatchSource:0}: Error finding container 9656ff7b04b13db0b328dfdf7a9c7f40c3b09adb043617dee45756029f8e66bf: Status 404 returned error can't find the container with id 9656ff7b04b13db0b328dfdf7a9c7f40c3b09adb043617dee45756029f8e66bf Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.811155 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6"] Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.865121 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:31 crc kubenswrapper[4809]: E0226 14:34:31.865301 4809 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 14:34:31 crc kubenswrapper[4809]: E0226 14:34:31.865378 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist podName:e778875f-43d9-4ab5-9e0c-e561a3d4bd2f nodeName:}" failed. No retries permitted until 2026-02-26 14:34:32.865358983 +0000 UTC m=+1251.338679516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist") pod "speaker-kwnwr" (UID: "e778875f-43d9-4ab5-9e0c-e561a3d4bd2f") : secret "metallb-memberlist" not found Feb 26 14:34:31 crc kubenswrapper[4809]: I0226 14:34:31.965001 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-v7drb"] Feb 26 14:34:31 crc kubenswrapper[4809]: W0226 14:34:31.969480 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd58f297_8233_45a5_8bd4_04621d1e1750.slice/crio-d02d343323b222095a2d35ff65f99b9a2e444559982c67da31123d47ac2d088c WatchSource:0}: Error finding container d02d343323b222095a2d35ff65f99b9a2e444559982c67da31123d47ac2d088c: Status 404 returned error can't find the container with id d02d343323b222095a2d35ff65f99b9a2e444559982c67da31123d47ac2d088c Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.239983 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v7drb" event={"ID":"cd58f297-8233-45a5-8bd4-04621d1e1750","Type":"ContainerStarted","Data":"d730982df9ff93f5c8710533d0ff1157818deba1bdc7181454fec6c1a6101fea"} Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.240382 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v7drb" event={"ID":"cd58f297-8233-45a5-8bd4-04621d1e1750","Type":"ContainerStarted","Data":"d02d343323b222095a2d35ff65f99b9a2e444559982c67da31123d47ac2d088c"} Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.244351 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"801f1c6956852da3259fd89099ee4d68dd04b7a1a6cee9c4e5df6544a9854712"} Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.247293 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" event={"ID":"629e9f19-72e1-497b-a156-51a0ed359d4c","Type":"ContainerStarted","Data":"9656ff7b04b13db0b328dfdf7a9c7f40c3b09adb043617dee45756029f8e66bf"} Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.879241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:32 crc kubenswrapper[4809]: I0226 14:34:32.885642 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e778875f-43d9-4ab5-9e0c-e561a3d4bd2f-memberlist\") pod \"speaker-kwnwr\" (UID: \"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f\") " pod="metallb-system/speaker-kwnwr" Feb 26 14:34:33 crc kubenswrapper[4809]: I0226 14:34:33.002968 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kwnwr" Feb 26 14:34:33 crc kubenswrapper[4809]: W0226 14:34:33.029792 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode778875f_43d9_4ab5_9e0c_e561a3d4bd2f.slice/crio-12ad578a1e1db34ff9f35dbb0183ca14836337857fc2af5ac5588e6d8264bbb4 WatchSource:0}: Error finding container 12ad578a1e1db34ff9f35dbb0183ca14836337857fc2af5ac5588e6d8264bbb4: Status 404 returned error can't find the container with id 12ad578a1e1db34ff9f35dbb0183ca14836337857fc2af5ac5588e6d8264bbb4 Feb 26 14:34:33 crc kubenswrapper[4809]: I0226 14:34:33.276824 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwnwr" event={"ID":"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f","Type":"ContainerStarted","Data":"12ad578a1e1db34ff9f35dbb0183ca14836337857fc2af5ac5588e6d8264bbb4"} Feb 26 14:34:33 crc kubenswrapper[4809]: I0226 14:34:33.284959 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v7drb" event={"ID":"cd58f297-8233-45a5-8bd4-04621d1e1750","Type":"ContainerStarted","Data":"7e43f341aafe04c11cac28e3b204262c9cfb7d6887b9b852937828885a059e70"} Feb 26 14:34:33 crc kubenswrapper[4809]: I0226 14:34:33.285209 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:33 crc kubenswrapper[4809]: I0226 14:34:33.307599 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-v7drb" podStartSLOduration=2.307417656 podStartE2EDuration="2.307417656s" podCreationTimestamp="2026-02-26 14:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:34:33.301610821 +0000 UTC m=+1251.774931344" watchObservedRunningTime="2026-02-26 14:34:33.307417656 +0000 UTC m=+1251.780738179" Feb 26 14:34:34 crc kubenswrapper[4809]: I0226 14:34:34.300912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwnwr" event={"ID":"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f","Type":"ContainerStarted","Data":"9ba0ad7f01286e6b66c75332444fdef0808c6688b716433deb279952265d7f07"} Feb 26 14:34:34 crc kubenswrapper[4809]: I0226 14:34:34.301232 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwnwr" event={"ID":"e778875f-43d9-4ab5-9e0c-e561a3d4bd2f","Type":"ContainerStarted","Data":"86bedb5aa7d000a5db9347ff6c2069088cfd256fc2e8ff03dd520a902af4c080"} Feb 26 14:34:34 crc kubenswrapper[4809]: I0226 14:34:34.301373 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kwnwr" Feb 26 14:34:34 crc kubenswrapper[4809]: I0226 14:34:34.322839 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kwnwr" podStartSLOduration=3.322819071 podStartE2EDuration="3.322819071s" podCreationTimestamp="2026-02-26 14:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:34:34.316047779 +0000 UTC m=+1252.789368302" watchObservedRunningTime="2026-02-26 14:34:34.322819071 +0000 UTC m=+1252.796139594" Feb 26 14:34:40 crc kubenswrapper[4809]: I0226 14:34:40.359156 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0457c9d-5a38-464b-92ca-da334aae1915" 
containerID="57f0dbbf44ffe77fb6bd271aad6a5144f788b860de914f7a0e31419363dd1450" exitCode=0 Feb 26 14:34:40 crc kubenswrapper[4809]: I0226 14:34:40.359426 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerDied","Data":"57f0dbbf44ffe77fb6bd271aad6a5144f788b860de914f7a0e31419363dd1450"} Feb 26 14:34:40 crc kubenswrapper[4809]: I0226 14:34:40.361730 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" event={"ID":"629e9f19-72e1-497b-a156-51a0ed359d4c","Type":"ContainerStarted","Data":"07590735da894fb0bd9e3a3a68615e0f4a3e5584f9736089841bf37bfd8e2937"} Feb 26 14:34:40 crc kubenswrapper[4809]: I0226 14:34:40.361869 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:40 crc kubenswrapper[4809]: I0226 14:34:40.407469 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" podStartSLOduration=1.576203306 podStartE2EDuration="9.407445279s" podCreationTimestamp="2026-02-26 14:34:31 +0000 UTC" firstStartedPulling="2026-02-26 14:34:31.813943498 +0000 UTC m=+1250.287264031" lastFinishedPulling="2026-02-26 14:34:39.645185481 +0000 UTC m=+1258.118506004" observedRunningTime="2026-02-26 14:34:40.406194584 +0000 UTC m=+1258.879515117" watchObservedRunningTime="2026-02-26 14:34:40.407445279 +0000 UTC m=+1258.880765812" Feb 26 14:34:41 crc kubenswrapper[4809]: I0226 14:34:41.371170 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0457c9d-5a38-464b-92ca-da334aae1915" containerID="9ed10f21aed48c87db9246d3cb30edfbcc2c7bef319a6b9075b6c0184f3563cf" exitCode=0 Feb 26 14:34:41 crc kubenswrapper[4809]: I0226 14:34:41.371411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerDied","Data":"9ed10f21aed48c87db9246d3cb30edfbcc2c7bef319a6b9075b6c0184f3563cf"} Feb 26 14:34:42 crc kubenswrapper[4809]: I0226 14:34:42.381291 4809 generic.go:334] "Generic (PLEG): container finished" podID="a0457c9d-5a38-464b-92ca-da334aae1915" containerID="7370b236b8fe9665c665bbfaa634ec62d51e30543756c4569cc2df9d96989746" exitCode=0 Feb 26 14:34:42 crc kubenswrapper[4809]: I0226 14:34:42.381390 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerDied","Data":"7370b236b8fe9665c665bbfaa634ec62d51e30543756c4569cc2df9d96989746"} Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.009033 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kwnwr" Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.399580 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"6fd720c3971ff9100c9f8cd0e08404c8bd77cd0b38540d13042a77c529ce8352"} Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.399621 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"dce85239d8b4fc502709b22d310c83521100fed47036ee0bb9749451cd212642"} Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.399631 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"f8a4fd26daad8656a4c2f6a82d88d60c620ed12948e8538b5b3d7bff29cced78"} Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.399640 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"c48bf5ea3e232d7738e55543ed05e217800d3006f82474cf0ca2c3219b984e90"} Feb 26 14:34:43 crc kubenswrapper[4809]: I0226 14:34:43.399651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"464e9c4b0428583e24f1355cd049f8593f6dfe0096218b08dd07eb442842e779"} Feb 26 14:34:44 crc kubenswrapper[4809]: I0226 14:34:44.412113 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpd62" event={"ID":"a0457c9d-5a38-464b-92ca-da334aae1915","Type":"ContainerStarted","Data":"9ad2de9ff0687544e871f6145b103198cd4906dc82697753aa9bdf7498806846"} Feb 26 14:34:44 crc kubenswrapper[4809]: I0226 14:34:44.413081 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:44 crc kubenswrapper[4809]: I0226 14:34:44.442124 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xpd62" podStartSLOduration=5.437212856 podStartE2EDuration="13.442102556s" podCreationTimestamp="2026-02-26 14:34:31 +0000 UTC" firstStartedPulling="2026-02-26 14:34:31.621615943 +0000 UTC m=+1250.094936466" lastFinishedPulling="2026-02-26 14:34:39.626505643 +0000 UTC m=+1258.099826166" observedRunningTime="2026-02-26 14:34:44.435733296 +0000 UTC m=+1262.909053849" watchObservedRunningTime="2026-02-26 14:34:44.442102556 +0000 UTC m=+1262.915423079" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.239260 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.240784 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.245445 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-m4hwc" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.245509 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.245521 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.278878 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.328775 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmtr\" (UniqueName: \"kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr\") pod \"openstack-operator-index-rjfhq\" (UID: \"970c1d76-7bc1-407d-a305-06c9a64fbefe\") " pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.407201 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.430890 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmtr\" (UniqueName: \"kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr\") pod \"openstack-operator-index-rjfhq\" (UID: \"970c1d76-7bc1-407d-a305-06c9a64fbefe\") " pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.462874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmtr\" (UniqueName: \"kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr\") pod \"openstack-operator-index-rjfhq\" (UID: \"970c1d76-7bc1-407d-a305-06c9a64fbefe\") " pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.475756 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.570122 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:46 crc kubenswrapper[4809]: I0226 14:34:46.975968 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:46 crc kubenswrapper[4809]: W0226 14:34:46.979335 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970c1d76_7bc1_407d_a305_06c9a64fbefe.slice/crio-a76d32ebced7d5bc7137bd0cb625279d578131edcf0f4e6ee85d412fdd64e4bc WatchSource:0}: Error finding container a76d32ebced7d5bc7137bd0cb625279d578131edcf0f4e6ee85d412fdd64e4bc: Status 404 returned error can't find the container with id a76d32ebced7d5bc7137bd0cb625279d578131edcf0f4e6ee85d412fdd64e4bc Feb 26 14:34:47 crc kubenswrapper[4809]: I0226 14:34:47.441784 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rjfhq" event={"ID":"970c1d76-7bc1-407d-a305-06c9a64fbefe","Type":"ContainerStarted","Data":"a76d32ebced7d5bc7137bd0cb625279d578131edcf0f4e6ee85d412fdd64e4bc"} Feb 26 14:34:49 crc kubenswrapper[4809]: I0226 14:34:49.405536 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.008308 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bmlld"] Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.009523 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.020958 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bmlld"] Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.117601 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftlv\" (UniqueName: \"kubernetes.io/projected/f06c5375-eeef-461b-9dce-048a10de5770-kube-api-access-4ftlv\") pod \"openstack-operator-index-bmlld\" (UID: \"f06c5375-eeef-461b-9dce-048a10de5770\") " pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.219791 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ftlv\" (UniqueName: \"kubernetes.io/projected/f06c5375-eeef-461b-9dce-048a10de5770-kube-api-access-4ftlv\") pod \"openstack-operator-index-bmlld\" (UID: \"f06c5375-eeef-461b-9dce-048a10de5770\") " pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.235691 4809 scope.go:117] "RemoveContainer" containerID="59922ba4eba47d79d502158e9f929426733de1b7e1706263e6ade028c7f25244" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.247821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ftlv\" (UniqueName: \"kubernetes.io/projected/f06c5375-eeef-461b-9dce-048a10de5770-kube-api-access-4ftlv\") pod \"openstack-operator-index-bmlld\" (UID: \"f06c5375-eeef-461b-9dce-048a10de5770\") " pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.330652 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.466729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rjfhq" event={"ID":"970c1d76-7bc1-407d-a305-06c9a64fbefe","Type":"ContainerStarted","Data":"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6"} Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.467112 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rjfhq" podUID="970c1d76-7bc1-407d-a305-06c9a64fbefe" containerName="registry-server" containerID="cri-o://8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6" gracePeriod=2 Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.493840 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rjfhq" podStartSLOduration=1.9714540889999999 podStartE2EDuration="4.493776503s" podCreationTimestamp="2026-02-26 14:34:46 +0000 UTC" firstStartedPulling="2026-02-26 14:34:46.980935458 +0000 UTC m=+1265.454255981" lastFinishedPulling="2026-02-26 14:34:49.503257872 +0000 UTC m=+1267.976578395" observedRunningTime="2026-02-26 14:34:50.48447626 +0000 UTC m=+1268.957796783" watchObservedRunningTime="2026-02-26 14:34:50.493776503 +0000 UTC m=+1268.967097026" Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.837714 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bmlld"] Feb 26 14:34:50 crc kubenswrapper[4809]: I0226 14:34:50.992679 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.042950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvmtr\" (UniqueName: \"kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr\") pod \"970c1d76-7bc1-407d-a305-06c9a64fbefe\" (UID: \"970c1d76-7bc1-407d-a305-06c9a64fbefe\") " Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.048310 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr" (OuterVolumeSpecName: "kube-api-access-hvmtr") pod "970c1d76-7bc1-407d-a305-06c9a64fbefe" (UID: "970c1d76-7bc1-407d-a305-06c9a64fbefe"). InnerVolumeSpecName "kube-api-access-hvmtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.145408 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvmtr\" (UniqueName: \"kubernetes.io/projected/970c1d76-7bc1-407d-a305-06c9a64fbefe-kube-api-access-hvmtr\") on node \"crc\" DevicePath \"\"" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.388032 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.474715 4809 generic.go:334] "Generic (PLEG): container finished" podID="970c1d76-7bc1-407d-a305-06c9a64fbefe" containerID="8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6" exitCode=0 Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.474755 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rjfhq" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.474771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rjfhq" event={"ID":"970c1d76-7bc1-407d-a305-06c9a64fbefe","Type":"ContainerDied","Data":"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6"} Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.475174 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rjfhq" event={"ID":"970c1d76-7bc1-407d-a305-06c9a64fbefe","Type":"ContainerDied","Data":"a76d32ebced7d5bc7137bd0cb625279d578131edcf0f4e6ee85d412fdd64e4bc"} Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.475197 4809 scope.go:117] "RemoveContainer" containerID="8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.478893 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bmlld" event={"ID":"f06c5375-eeef-461b-9dce-048a10de5770","Type":"ContainerStarted","Data":"3f57702549625758ea97277e4b7bfb468e007a11abbcfb04be6e493e70204a58"} Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.478932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bmlld" event={"ID":"f06c5375-eeef-461b-9dce-048a10de5770","Type":"ContainerStarted","Data":"eaefadaf262b35026ee08cff5368f35ff9c33f3e72be26c96a2cf1e398440a2f"} Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.495871 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bmlld" podStartSLOduration=2.375255736 podStartE2EDuration="2.49585399s" podCreationTimestamp="2026-02-26 14:34:49 +0000 UTC" firstStartedPulling="2026-02-26 14:34:50.870444235 +0000 UTC m=+1269.343764768" lastFinishedPulling="2026-02-26 14:34:50.991042499 +0000 UTC m=+1269.464363022" observedRunningTime="2026-02-26 14:34:51.492536456 +0000 UTC m=+1269.965856979" watchObservedRunningTime="2026-02-26 14:34:51.49585399 +0000 UTC m=+1269.969174513" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.496946 4809 scope.go:117] "RemoveContainer" containerID="8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6" Feb 26 14:34:51 crc kubenswrapper[4809]: E0226 14:34:51.497797 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6\": container with ID starting with 8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6 not found: ID does not exist" containerID="8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.497819 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6"} err="failed to get container status \"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6\": rpc error: code = NotFound desc = could not find container \"8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6\": container with ID starting with 8371b3b960bb51783df89d22bdee6673212dee4d0819369ab7d0bd4372a08de6 not found: ID does not exist" Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.514810 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.520712 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rjfhq"] Feb 26 14:34:51 crc kubenswrapper[4809]: I0226 14:34:51.538301 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-v7drb" Feb 26 14:34:52 crc kubenswrapper[4809]: I0226 14:34:52.266495 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970c1d76-7bc1-407d-a305-06c9a64fbefe" path="/var/lib/kubelet/pods/970c1d76-7bc1-407d-a305-06c9a64fbefe/volumes" Feb 26 14:35:00 crc kubenswrapper[4809]: I0226 14:35:00.331155 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:35:00 crc kubenswrapper[4809]: I0226 14:35:00.331507 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:35:00 crc kubenswrapper[4809]: I0226 14:35:00.359981 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:35:00 crc kubenswrapper[4809]: I0226 14:35:00.591826 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bmlld" Feb 26 14:35:01 crc kubenswrapper[4809]: I0226 14:35:01.409941 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xpd62" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.678055 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7"] Feb 26 14:35:07 crc kubenswrapper[4809]: E0226 14:35:07.678552 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970c1d76-7bc1-407d-a305-06c9a64fbefe" containerName="registry-server" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.678564 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="970c1d76-7bc1-407d-a305-06c9a64fbefe" containerName="registry-server" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.678715 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="970c1d76-7bc1-407d-a305-06c9a64fbefe" containerName="registry-server" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.679739 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.682571 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zjw7x" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.691178 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7"] Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.733161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.733259 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7pp\" (UniqueName: \"kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.733311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.835673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.836031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l7pp\" (UniqueName: \"kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.836095 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.836392 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.836553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.857504 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l7pp\" (UniqueName: \"kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp\") pod \"74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:07 crc kubenswrapper[4809]: I0226 14:35:07.999661 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:08 crc kubenswrapper[4809]: I0226 14:35:08.459780 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7"] Feb 26 14:35:08 crc kubenswrapper[4809]: I0226 14:35:08.641694 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" event={"ID":"c84e8dc8-cb82-4203-9e89-56e191b7e072","Type":"ContainerStarted","Data":"b0c8119f179e8a3e81893630600c4dbeb6403f65f068d694275ab179c449da29"} Feb 26 14:35:09 crc kubenswrapper[4809]: I0226 14:35:09.650439 4809 generic.go:334] "Generic (PLEG): container finished" podID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerID="e8878e86eb76ef799b30026b42d389b75d297f0a0817c5d74d3c3599290a2dbe" exitCode=0 Feb 26 14:35:09 crc kubenswrapper[4809]: I0226 14:35:09.650489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" event={"ID":"c84e8dc8-cb82-4203-9e89-56e191b7e072","Type":"ContainerDied","Data":"e8878e86eb76ef799b30026b42d389b75d297f0a0817c5d74d3c3599290a2dbe"} Feb 26 14:35:10 crc kubenswrapper[4809]: I0226 14:35:10.660975 4809 generic.go:334] "Generic (PLEG): container finished" podID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerID="eb1058c29722320f9f0da0621bb46d2e41abb2a677376f3bc071d4e062fd709e" exitCode=0 Feb 26 14:35:10 crc kubenswrapper[4809]: I0226 14:35:10.661040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" event={"ID":"c84e8dc8-cb82-4203-9e89-56e191b7e072","Type":"ContainerDied","Data":"eb1058c29722320f9f0da0621bb46d2e41abb2a677376f3bc071d4e062fd709e"} Feb 26 14:35:11 crc kubenswrapper[4809]: I0226 14:35:11.701419 4809 generic.go:334] "Generic (PLEG): container finished" podID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerID="6e267da49e39f47929a2906178c50b25f794731685699eb19bae4b6508d57ad4" exitCode=0 Feb 26 14:35:11 crc kubenswrapper[4809]: I0226 14:35:11.701519 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" event={"ID":"c84e8dc8-cb82-4203-9e89-56e191b7e072","Type":"ContainerDied","Data":"6e267da49e39f47929a2906178c50b25f794731685699eb19bae4b6508d57ad4"} Feb 26 14:35:11 crc kubenswrapper[4809]: I0226 14:35:11.793638 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:35:11 crc kubenswrapper[4809]: I0226 14:35:11.793706 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.074693 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.233309 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l7pp\" (UniqueName: \"kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp\") pod \"c84e8dc8-cb82-4203-9e89-56e191b7e072\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.233422 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util\") pod \"c84e8dc8-cb82-4203-9e89-56e191b7e072\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.233547 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle\") pod \"c84e8dc8-cb82-4203-9e89-56e191b7e072\" (UID: \"c84e8dc8-cb82-4203-9e89-56e191b7e072\") " Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.234826 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle" (OuterVolumeSpecName: "bundle") pod "c84e8dc8-cb82-4203-9e89-56e191b7e072" (UID: "c84e8dc8-cb82-4203-9e89-56e191b7e072"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.240496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp" (OuterVolumeSpecName: "kube-api-access-5l7pp") pod "c84e8dc8-cb82-4203-9e89-56e191b7e072" (UID: "c84e8dc8-cb82-4203-9e89-56e191b7e072"). InnerVolumeSpecName "kube-api-access-5l7pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.247695 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util" (OuterVolumeSpecName: "util") pod "c84e8dc8-cb82-4203-9e89-56e191b7e072" (UID: "c84e8dc8-cb82-4203-9e89-56e191b7e072"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.335625 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l7pp\" (UniqueName: \"kubernetes.io/projected/c84e8dc8-cb82-4203-9e89-56e191b7e072-kube-api-access-5l7pp\") on node \"crc\" DevicePath \"\"" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.335679 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-util\") on node \"crc\" DevicePath \"\"" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.335693 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c84e8dc8-cb82-4203-9e89-56e191b7e072-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.721871 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" event={"ID":"c84e8dc8-cb82-4203-9e89-56e191b7e072","Type":"ContainerDied","Data":"b0c8119f179e8a3e81893630600c4dbeb6403f65f068d694275ab179c449da29"} Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.722252 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c8119f179e8a3e81893630600c4dbeb6403f65f068d694275ab179c449da29" Feb 26 14:35:13 crc kubenswrapper[4809]: I0226 14:35:13.721932 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.557523 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp"] Feb 26 14:35:20 crc kubenswrapper[4809]: E0226 14:35:20.558687 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="extract" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.558708 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="extract" Feb 26 14:35:20 crc kubenswrapper[4809]: E0226 14:35:20.558771 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="util" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.558781 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="util" Feb 26 14:35:20 crc kubenswrapper[4809]: E0226 14:35:20.558837 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="pull" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.558846 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="pull" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.559078 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84e8dc8-cb82-4203-9e89-56e191b7e072" containerName="extract" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.560031 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.566369 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-29mpc" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.586307 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp"] Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.661991 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6vn\" (UniqueName: \"kubernetes.io/projected/bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c-kube-api-access-gr6vn\") pod \"openstack-operator-controller-init-fd648b64f-xrqvp\" (UID: \"bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c\") " pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.764280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6vn\" (UniqueName: \"kubernetes.io/projected/bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c-kube-api-access-gr6vn\") pod \"openstack-operator-controller-init-fd648b64f-xrqvp\" (UID: \"bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c\") " pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.796089 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6vn\" (UniqueName: \"kubernetes.io/projected/bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c-kube-api-access-gr6vn\") pod \"openstack-operator-controller-init-fd648b64f-xrqvp\" (UID: \"bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c\") " pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:20 crc kubenswrapper[4809]: I0226 14:35:20.881193 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:21 crc kubenswrapper[4809]: I0226 14:35:21.351399 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp"] Feb 26 14:35:21 crc kubenswrapper[4809]: I0226 14:35:21.789499 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" event={"ID":"bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c","Type":"ContainerStarted","Data":"f5983220ab7bd32804bcde3fbc4a0b7cac30e169c66f992b3a990acfd7f9590b"} Feb 26 14:35:25 crc kubenswrapper[4809]: I0226 14:35:25.827699 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" event={"ID":"bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c","Type":"ContainerStarted","Data":"04951999470a358f6bde0e42b387bf6ccfde30452907a1794d7a14fcaa7f972b"} Feb 26 14:35:25 crc kubenswrapper[4809]: I0226 14:35:25.828756 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:25 crc kubenswrapper[4809]: I0226 14:35:25.874239 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" podStartSLOduration=2.000962176 podStartE2EDuration="5.874219628s" podCreationTimestamp="2026-02-26 14:35:20 +0000 UTC" firstStartedPulling="2026-02-26 14:35:21.352453486 +0000 UTC m=+1299.825774019" lastFinishedPulling="2026-02-26 14:35:25.225710948 +0000 UTC m=+1303.699031471" observedRunningTime="2026-02-26 14:35:25.863501903 +0000 UTC m=+1304.336822426" watchObservedRunningTime="2026-02-26 14:35:25.874219628 +0000 UTC m=+1304.347540141" Feb 26 14:35:30 crc kubenswrapper[4809]: I0226 14:35:30.883909 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" Feb 26 14:35:41 crc kubenswrapper[4809]: I0226 14:35:41.794312 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:35:41 crc kubenswrapper[4809]: I0226 14:35:41.794861 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.151009 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535276-vvq4j"] Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.152831 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.156702 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.156882 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.157397 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.166108 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-vvq4j"] Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.237109 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc958\" (UniqueName: \"kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958\") pod \"auto-csr-approver-29535276-vvq4j\" (UID: \"5eb73f4b-6f13-4340-a250-fd39e979a4e3\") " pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.338888 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc958\" (UniqueName: \"kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958\") pod \"auto-csr-approver-29535276-vvq4j\" (UID: \"5eb73f4b-6f13-4340-a250-fd39e979a4e3\") " pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.365305 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc958\" (UniqueName: \"kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958\") pod \"auto-csr-approver-29535276-vvq4j\" (UID: \"5eb73f4b-6f13-4340-a250-fd39e979a4e3\") " pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.474357 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:00 crc kubenswrapper[4809]: I0226 14:36:00.915264 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-vvq4j"] Feb 26 14:36:01 crc kubenswrapper[4809]: I0226 14:36:01.092372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" event={"ID":"5eb73f4b-6f13-4340-a250-fd39e979a4e3","Type":"ContainerStarted","Data":"032a0b25184c29ebdbb92ddd0f2e16f1d2594b7f3d67713850541f780007efb2"} Feb 26 14:36:03 crc kubenswrapper[4809]: I0226 14:36:03.105815 4809 generic.go:334] "Generic (PLEG): container finished" podID="5eb73f4b-6f13-4340-a250-fd39e979a4e3" containerID="3d97f6830958b6b69f453794d44eda14b7e134a1ba5b744b593848f3558bddac" exitCode=0 Feb 26 14:36:03 crc kubenswrapper[4809]: I0226 14:36:03.105916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" event={"ID":"5eb73f4b-6f13-4340-a250-fd39e979a4e3","Type":"ContainerDied","Data":"3d97f6830958b6b69f453794d44eda14b7e134a1ba5b744b593848f3558bddac"} Feb 26 14:36:04 crc kubenswrapper[4809]: I0226 14:36:04.395505 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:04 crc kubenswrapper[4809]: I0226 14:36:04.528163 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc958\" (UniqueName: \"kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958\") pod \"5eb73f4b-6f13-4340-a250-fd39e979a4e3\" (UID: \"5eb73f4b-6f13-4340-a250-fd39e979a4e3\") " Feb 26 14:36:04 crc kubenswrapper[4809]: I0226 14:36:04.532557 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958" (OuterVolumeSpecName: "kube-api-access-bc958") pod "5eb73f4b-6f13-4340-a250-fd39e979a4e3" (UID: "5eb73f4b-6f13-4340-a250-fd39e979a4e3"). InnerVolumeSpecName "kube-api-access-bc958". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:36:04 crc kubenswrapper[4809]: I0226 14:36:04.630666 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc958\" (UniqueName: \"kubernetes.io/projected/5eb73f4b-6f13-4340-a250-fd39e979a4e3-kube-api-access-bc958\") on node \"crc\" DevicePath \"\"" Feb 26 14:36:05 crc kubenswrapper[4809]: I0226 14:36:05.123410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" event={"ID":"5eb73f4b-6f13-4340-a250-fd39e979a4e3","Type":"ContainerDied","Data":"032a0b25184c29ebdbb92ddd0f2e16f1d2594b7f3d67713850541f780007efb2"} Feb 26 14:36:05 crc kubenswrapper[4809]: I0226 14:36:05.123452 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032a0b25184c29ebdbb92ddd0f2e16f1d2594b7f3d67713850541f780007efb2" Feb 26 14:36:05 crc kubenswrapper[4809]: I0226 14:36:05.123524 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535276-vvq4j" Feb 26 14:36:05 crc kubenswrapper[4809]: I0226 14:36:05.463829 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-cjtxt"] Feb 26 14:36:05 crc kubenswrapper[4809]: I0226 14:36:05.471148 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535270-cjtxt"] Feb 26 14:36:06 crc kubenswrapper[4809]: I0226 14:36:06.270523 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3bae85-a2b7-4dc7-9ef4-5001ea024453" path="/var/lib/kubelet/pods/0a3bae85-a2b7-4dc7-9ef4-5001ea024453/volumes" Feb 26 14:36:11 crc kubenswrapper[4809]: I0226 14:36:11.794166 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:36:11 crc kubenswrapper[4809]: I0226 14:36:11.794702 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:36:11 crc kubenswrapper[4809]: I0226 14:36:11.794791 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:36:11 crc kubenswrapper[4809]: I0226 14:36:11.795551 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:36:11 crc kubenswrapper[4809]: I0226 14:36:11.795619 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168" gracePeriod=600 Feb 26 14:36:12 crc kubenswrapper[4809]: I0226 14:36:12.182715 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168" exitCode=0 Feb 26 14:36:12 crc kubenswrapper[4809]: I0226 14:36:12.182758 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168"} Feb 26 14:36:12 crc kubenswrapper[4809]: I0226 14:36:12.183076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0"} Feb 26 14:36:12 crc kubenswrapper[4809]: I0226 14:36:12.183097 4809 scope.go:117] "RemoveContainer" 
containerID="3cbc23da414fd25417954cf41e7597ad8ab5b46de123c00647a78a4df84173b9" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.652985 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff"] Feb 26 14:36:13 crc kubenswrapper[4809]: E0226 14:36:13.653699 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb73f4b-6f13-4340-a250-fd39e979a4e3" containerName="oc" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.653716 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb73f4b-6f13-4340-a250-fd39e979a4e3" containerName="oc" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.653932 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb73f4b-6f13-4340-a250-fd39e979a4e3" containerName="oc" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.654675 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.657614 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k2rs4" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.666049 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.677321 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.678653 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.681550 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2hxwq" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.687135 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.688454 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.692476 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jdhkv" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.718376 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.730280 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.757451 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.758795 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.768077 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.769770 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.775420 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rqgkw" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.780180 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.780450 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-znw8j" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.787451 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgc4\" (UniqueName: \"kubernetes.io/projected/0dc90358-78e1-4391-9b04-72fb1a0ffb6e-kube-api-access-2kgc4\") pod \"barbican-operator-controller-manager-868647ff47-7ptff\" (UID: \"0dc90358-78e1-4391-9b04-72fb1a0ffb6e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.787690 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lcs\" (UniqueName: \"kubernetes.io/projected/369ebb20-08ea-4aa4-ba33-8eecc4a208ca-kube-api-access-24lcs\") pod \"designate-operator-controller-manager-6d8bf5c495-c9sm7\" (UID: \"369ebb20-08ea-4aa4-ba33-8eecc4a208ca\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.787875 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szr2f\" (UniqueName: \"kubernetes.io/projected/51190e04-2cb1-41e9-9d62-23ef12d0edd3-kube-api-access-szr2f\") pod \"cinder-operator-controller-manager-55d77d7b5c-wmn76\" (UID: \"51190e04-2cb1-41e9-9d62-23ef12d0edd3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.788126 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.796403 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.797612 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.807146 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tw7pm" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.813251 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mvll2"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.816861 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.825426 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-728wj" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.826186 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.830612 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.842222 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mvll2"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.850675 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.852729 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.857457 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7qbzd" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.873853 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889245 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szr2f\" (UniqueName: \"kubernetes.io/projected/51190e04-2cb1-41e9-9d62-23ef12d0edd3-kube-api-access-szr2f\") pod \"cinder-operator-controller-manager-55d77d7b5c-wmn76\" (UID: \"51190e04-2cb1-41e9-9d62-23ef12d0edd3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9f6\" (UniqueName: \"kubernetes.io/projected/2130e114-53fd-4853-bd3a-df26c1c3df4a-kube-api-access-dx9f6\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgc4\" (UniqueName: \"kubernetes.io/projected/0dc90358-78e1-4391-9b04-72fb1a0ffb6e-kube-api-access-2kgc4\") pod \"barbican-operator-controller-manager-868647ff47-7ptff\" (UID: \"0dc90358-78e1-4391-9b04-72fb1a0ffb6e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lcs\" (UniqueName: \"kubernetes.io/projected/369ebb20-08ea-4aa4-ba33-8eecc4a208ca-kube-api-access-24lcs\") pod \"designate-operator-controller-manager-6d8bf5c495-c9sm7\" (UID: \"369ebb20-08ea-4aa4-ba33-8eecc4a208ca\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889424 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqlh\" (UniqueName: \"kubernetes.io/projected/a94df460-1916-4302-a528-1850277c2c68-kube-api-access-5tqlh\") pod \"heat-operator-controller-manager-69f49c598c-htlkr\" (UID: \"a94df460-1916-4302-a528-1850277c2c68\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889456 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwbf7\" (UniqueName: \"kubernetes.io/projected/452db9cf-1689-42fa-bd48-15be5d5012e4-kube-api-access-rwbf7\") pod \"glance-operator-controller-manager-784b5bb6c5-r946b\" (UID: \"452db9cf-1689-42fa-bd48-15be5d5012e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889483 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8mm\" (UniqueName: 
\"kubernetes.io/projected/b891860e-25ba-48f0-90f1-a9f481e661eb-kube-api-access-2f8mm\") pod \"horizon-operator-controller-manager-5b9b8895d5-psj8j\" (UID: \"b891860e-25ba-48f0-90f1-a9f481e661eb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.889520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.936179 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.952305 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.974725 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv"] Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.985044 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991329 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9f6\" (UniqueName: \"kubernetes.io/projected/2130e114-53fd-4853-bd3a-df26c1c3df4a-kube-api-access-dx9f6\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4k4\" (UniqueName: \"kubernetes.io/projected/ed2539dd-3109-42bf-9c5b-aee680db3b4f-kube-api-access-lb4k4\") pod \"ironic-operator-controller-manager-554564d7fc-tlnl6\" (UID: \"ed2539dd-3109-42bf-9c5b-aee680db3b4f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991440 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthxk\" (UniqueName: \"kubernetes.io/projected/7a66a093-3f9f-49a8-a45b-84aef0465d4e-kube-api-access-dthxk\") pod \"keystone-operator-controller-manager-b4d948c87-vb9br\" (UID: \"7a66a093-3f9f-49a8-a45b-84aef0465d4e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991521 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqlh\" (UniqueName: \"kubernetes.io/projected/a94df460-1916-4302-a528-1850277c2c68-kube-api-access-5tqlh\") pod \"heat-operator-controller-manager-69f49c598c-htlkr\" (UID: \"a94df460-1916-4302-a528-1850277c2c68\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rwbf7\" (UniqueName: \"kubernetes.io/projected/452db9cf-1689-42fa-bd48-15be5d5012e4-kube-api-access-rwbf7\") pod \"glance-operator-controller-manager-784b5bb6c5-r946b\" (UID: \"452db9cf-1689-42fa-bd48-15be5d5012e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991613 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f8mm\" (UniqueName: \"kubernetes.io/projected/b891860e-25ba-48f0-90f1-a9f481e661eb-kube-api-access-2f8mm\") pod \"horizon-operator-controller-manager-5b9b8895d5-psj8j\" (UID: \"b891860e-25ba-48f0-90f1-a9f481e661eb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:13 crc kubenswrapper[4809]: I0226 14:36:13.991677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:13 crc kubenswrapper[4809]: E0226 14:36:13.991933 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:13 crc kubenswrapper[4809]: E0226 14:36:13.992000 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:14.491978083 +0000 UTC m=+1352.965298606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.001659 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ngdqh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.006049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z8nzw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.036201 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lcs\" (UniqueName: \"kubernetes.io/projected/369ebb20-08ea-4aa4-ba33-8eecc4a208ca-kube-api-access-24lcs\") pod \"designate-operator-controller-manager-6d8bf5c495-c9sm7\" (UID: \"369ebb20-08ea-4aa4-ba33-8eecc4a208ca\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.057405 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szr2f\" (UniqueName: \"kubernetes.io/projected/51190e04-2cb1-41e9-9d62-23ef12d0edd3-kube-api-access-szr2f\") pod \"cinder-operator-controller-manager-55d77d7b5c-wmn76\" (UID: \"51190e04-2cb1-41e9-9d62-23ef12d0edd3\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.045830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgc4\" 
(UniqueName: \"kubernetes.io/projected/0dc90358-78e1-4391-9b04-72fb1a0ffb6e-kube-api-access-2kgc4\") pod \"barbican-operator-controller-manager-868647ff47-7ptff\" (UID: \"0dc90358-78e1-4391-9b04-72fb1a0ffb6e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.064175 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.065469 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.073453 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-b9h7b" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.062479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f8mm\" (UniqueName: \"kubernetes.io/projected/b891860e-25ba-48f0-90f1-a9f481e661eb-kube-api-access-2f8mm\") pod \"horizon-operator-controller-manager-5b9b8895d5-psj8j\" (UID: \"b891860e-25ba-48f0-90f1-a9f481e661eb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.083299 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwbf7\" (UniqueName: \"kubernetes.io/projected/452db9cf-1689-42fa-bd48-15be5d5012e4-kube-api-access-rwbf7\") pod \"glance-operator-controller-manager-784b5bb6c5-r946b\" (UID: \"452db9cf-1689-42fa-bd48-15be5d5012e4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.083662 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.084131 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.099795 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqlh\" (UniqueName: \"kubernetes.io/projected/a94df460-1916-4302-a528-1850277c2c68-kube-api-access-5tqlh\") pod \"heat-operator-controller-manager-69f49c598c-htlkr\" (UID: \"a94df460-1916-4302-a528-1850277c2c68\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.101512 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7jvz\" (UniqueName: \"kubernetes.io/projected/f88f4170-586f-4203-8c9b-12aa0865a6be-kube-api-access-z7jvz\") pod \"mariadb-operator-controller-manager-6994f66f48-wbnwh\" (UID: \"f88f4170-586f-4203-8c9b-12aa0865a6be\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.101582 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwb6\" (UniqueName: \"kubernetes.io/projected/e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae-kube-api-access-wxwb6\") pod \"manila-operator-controller-manager-67d996989d-qnxhv\" (UID: \"e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.101661 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb4k4\" (UniqueName: \"kubernetes.io/projected/ed2539dd-3109-42bf-9c5b-aee680db3b4f-kube-api-access-lb4k4\") pod \"ironic-operator-controller-manager-554564d7fc-tlnl6\" (UID: \"ed2539dd-3109-42bf-9c5b-aee680db3b4f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.101689 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dthxk\" (UniqueName: \"kubernetes.io/projected/7a66a093-3f9f-49a8-a45b-84aef0465d4e-kube-api-access-dthxk\") pod \"keystone-operator-controller-manager-b4d948c87-vb9br\" (UID: \"7a66a093-3f9f-49a8-a45b-84aef0465d4e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.115409 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.119484 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.131665 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9f6\" (UniqueName: \"kubernetes.io/projected/2130e114-53fd-4853-bd3a-df26c1c3df4a-kube-api-access-dx9f6\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.149328 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.156478 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dthxk\" (UniqueName: \"kubernetes.io/projected/7a66a093-3f9f-49a8-a45b-84aef0465d4e-kube-api-access-dthxk\") pod \"keystone-operator-controller-manager-b4d948c87-vb9br\" (UID: \"7a66a093-3f9f-49a8-a45b-84aef0465d4e\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.159437 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.206751 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb4k4\" (UniqueName: \"kubernetes.io/projected/ed2539dd-3109-42bf-9c5b-aee680db3b4f-kube-api-access-lb4k4\") pod \"ironic-operator-controller-manager-554564d7fc-tlnl6\" (UID: \"ed2539dd-3109-42bf-9c5b-aee680db3b4f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.207384 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.207930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7jvz\" (UniqueName: \"kubernetes.io/projected/f88f4170-586f-4203-8c9b-12aa0865a6be-kube-api-access-z7jvz\") pod \"mariadb-operator-controller-manager-6994f66f48-wbnwh\" (UID: \"f88f4170-586f-4203-8c9b-12aa0865a6be\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.208195 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwb6\" (UniqueName: \"kubernetes.io/projected/e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae-kube-api-access-wxwb6\") pod \"manila-operator-controller-manager-67d996989d-qnxhv\" (UID: \"e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.208677 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.223173 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.224379 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.224879 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-82q4j" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.228485 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-4gpvn" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.280817 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.297999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7jvz\" (UniqueName: \"kubernetes.io/projected/f88f4170-586f-4203-8c9b-12aa0865a6be-kube-api-access-z7jvz\") pod \"mariadb-operator-controller-manager-6994f66f48-wbnwh\" (UID: \"f88f4170-586f-4203-8c9b-12aa0865a6be\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.302590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwb6\" (UniqueName: \"kubernetes.io/projected/e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae-kube-api-access-wxwb6\") pod \"manila-operator-controller-manager-67d996989d-qnxhv\" (UID: \"e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.303061 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.313108 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrpnb\" (UniqueName: \"kubernetes.io/projected/00bdb1ef-c56b-4abe-b491-9c24a8f9089d-kube-api-access-rrpnb\") pod \"nova-operator-controller-manager-567668f5cf-jsjcz\" (UID: \"00bdb1ef-c56b-4abe-b491-9c24a8f9089d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.313312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flchr\" (UniqueName: \"kubernetes.io/projected/d05f3883-4b90-4b5d-94b2-b7e916a66ed6-kube-api-access-flchr\") pod \"neutron-operator-controller-manager-6bd4687957-llxf9\" (UID: \"d05f3883-4b90-4b5d-94b2-b7e916a66ed6\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.323537 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.332485 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.374365 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.375619 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.375654 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.375665 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.375680 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.376580 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.377283 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.393114 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.393393 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gcgzs" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.393556 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2fj62" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.431426 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.432745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flchr\" (UniqueName: \"kubernetes.io/projected/d05f3883-4b90-4b5d-94b2-b7e916a66ed6-kube-api-access-flchr\") pod \"neutron-operator-controller-manager-6bd4687957-llxf9\" (UID: \"d05f3883-4b90-4b5d-94b2-b7e916a66ed6\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.432815 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9c9\" (UniqueName: \"kubernetes.io/projected/ed3d7dc0-026c-4ed5-b816-b0249300c743-kube-api-access-xb9c9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.432900 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdvs\" (UniqueName: 
\"kubernetes.io/projected/db50276a-5e85-4edb-9538-0b42201fbe74-kube-api-access-5jdvs\") pod \"octavia-operator-controller-manager-659dc6bbfc-xcrth\" (UID: \"db50276a-5e85-4edb-9538-0b42201fbe74\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.432952 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.433199 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrpnb\" (UniqueName: \"kubernetes.io/projected/00bdb1ef-c56b-4abe-b491-9c24a8f9089d-kube-api-access-rrpnb\") pod \"nova-operator-controller-manager-567668f5cf-jsjcz\" (UID: \"00bdb1ef-c56b-4abe-b491-9c24a8f9089d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.435825 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.436788 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.437250 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.439899 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.444068 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-59d4h" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.444303 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zwjxn" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.452526 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.461371 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.470515 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.473835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flchr\" (UniqueName: \"kubernetes.io/projected/d05f3883-4b90-4b5d-94b2-b7e916a66ed6-kube-api-access-flchr\") pod \"neutron-operator-controller-manager-6bd4687957-llxf9\" (UID: \"d05f3883-4b90-4b5d-94b2-b7e916a66ed6\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.480087 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrpnb\" (UniqueName: \"kubernetes.io/projected/00bdb1ef-c56b-4abe-b491-9c24a8f9089d-kube-api-access-rrpnb\") pod \"nova-operator-controller-manager-567668f5cf-jsjcz\" (UID: \"00bdb1ef-c56b-4abe-b491-9c24a8f9089d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.480163 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.487991 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.489311 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.491232 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.494971 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.506421 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.510001 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pd9m8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.518544 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2gxnk" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.529190 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.535979 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540703 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9c9\" (UniqueName: \"kubernetes.io/projected/ed3d7dc0-026c-4ed5-b816-b0249300c743-kube-api-access-xb9c9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lsp\" (UniqueName: \"kubernetes.io/projected/2b26231b-2e6e-4484-8014-6dcf40d06f40-kube-api-access-j4lsp\") pod \"telemetry-operator-controller-manager-57dc789b66-zjvhb\" (UID: \"2b26231b-2e6e-4484-8014-6dcf40d06f40\") " pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540765 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7snd\" (UniqueName: \"kubernetes.io/projected/6f049af1-526c-496e-a9af-4066b69ed359-kube-api-access-h7snd\") pod \"placement-operator-controller-manager-8497b45c89-25wlm\" (UID: \"6f049af1-526c-496e-a9af-4066b69ed359\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540801 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdvs\" (UniqueName: \"kubernetes.io/projected/db50276a-5e85-4edb-9538-0b42201fbe74-kube-api-access-5jdvs\") pod \"octavia-operator-controller-manager-659dc6bbfc-xcrth\" (UID: \"db50276a-5e85-4edb-9538-0b42201fbe74\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540839 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.540905 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hjmr\" (UniqueName: \"kubernetes.io/projected/957002f1-5ca4-484b-b664-b7b563257915-kube-api-access-5hjmr\") pod \"ovn-operator-controller-manager-5955d8c787-b24kw\" (UID: \"957002f1-5ca4-484b-b664-b7b563257915\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.541033 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkdl\" (UniqueName: \"kubernetes.io/projected/15986ded-5e26-4bcc-bf72-ee349431961a-kube-api-access-6pkdl\") pod \"swift-operator-controller-manager-68f46476f-f6n9h\" (UID: \"15986ded-5e26-4bcc-bf72-ee349431961a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.541192 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.541243 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:15.541226322 +0000 UTC m=+1354.014546845 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.541945 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.541986 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:15.041975503 +0000 UTC m=+1353.515296026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.550645 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.560784 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.581500 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.588057 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.592621 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7gkvd" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.607324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdvs\" (UniqueName: \"kubernetes.io/projected/db50276a-5e85-4edb-9538-0b42201fbe74-kube-api-access-5jdvs\") pod \"octavia-operator-controller-manager-659dc6bbfc-xcrth\" (UID: \"db50276a-5e85-4edb-9538-0b42201fbe74\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.612380 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.621074 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.622230 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.625471 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-lr28j" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.627516 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9c9\" (UniqueName: \"kubernetes.io/projected/ed3d7dc0-026c-4ed5-b816-b0249300c743-kube-api-access-xb9c9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.642307 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643449 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hjmr\" (UniqueName: \"kubernetes.io/projected/957002f1-5ca4-484b-b664-b7b563257915-kube-api-access-5hjmr\") pod \"ovn-operator-controller-manager-5955d8c787-b24kw\" (UID: \"957002f1-5ca4-484b-b664-b7b563257915\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643511 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkdl\" (UniqueName: \"kubernetes.io/projected/15986ded-5e26-4bcc-bf72-ee349431961a-kube-api-access-6pkdl\") pod \"swift-operator-controller-manager-68f46476f-f6n9h\" (UID: \"15986ded-5e26-4bcc-bf72-ee349431961a\") " 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh6d4\" (UniqueName: \"kubernetes.io/projected/2c068d1c-3f6c-49a3-bf65-d29b68c5ad11-kube-api-access-hh6d4\") pod \"test-operator-controller-manager-5dc6794d5b-w2zff\" (UID: \"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643645 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjm2\" (UniqueName: \"kubernetes.io/projected/5ba0d806-2bcd-45f1-b529-36ed243d775b-kube-api-access-gjjm2\") pod \"watcher-operator-controller-manager-bccc79885-dhrj6\" (UID: \"5ba0d806-2bcd-45f1-b529-36ed243d775b\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643711 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4lsp\" (UniqueName: \"kubernetes.io/projected/2b26231b-2e6e-4484-8014-6dcf40d06f40-kube-api-access-j4lsp\") pod \"telemetry-operator-controller-manager-57dc789b66-zjvhb\" (UID: \"2b26231b-2e6e-4484-8014-6dcf40d06f40\") " pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.643761 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7snd\" (UniqueName: \"kubernetes.io/projected/6f049af1-526c-496e-a9af-4066b69ed359-kube-api-access-h7snd\") pod \"placement-operator-controller-manager-8497b45c89-25wlm\" (UID: \"6f049af1-526c-496e-a9af-4066b69ed359\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.696868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkdl\" (UniqueName: \"kubernetes.io/projected/15986ded-5e26-4bcc-bf72-ee349431961a-kube-api-access-6pkdl\") pod \"swift-operator-controller-manager-68f46476f-f6n9h\" (UID: \"15986ded-5e26-4bcc-bf72-ee349431961a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.703213 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7snd\" (UniqueName: \"kubernetes.io/projected/6f049af1-526c-496e-a9af-4066b69ed359-kube-api-access-h7snd\") pod \"placement-operator-controller-manager-8497b45c89-25wlm\" (UID: \"6f049af1-526c-496e-a9af-4066b69ed359\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.713991 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4lsp\" (UniqueName: \"kubernetes.io/projected/2b26231b-2e6e-4484-8014-6dcf40d06f40-kube-api-access-j4lsp\") pod \"telemetry-operator-controller-manager-57dc789b66-zjvhb\" (UID: \"2b26231b-2e6e-4484-8014-6dcf40d06f40\") " pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.724064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hjmr\" (UniqueName: 
\"kubernetes.io/projected/957002f1-5ca4-484b-b664-b7b563257915-kube-api-access-5hjmr\") pod \"ovn-operator-controller-manager-5955d8c787-b24kw\" (UID: \"957002f1-5ca4-484b-b664-b7b563257915\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.752350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh6d4\" (UniqueName: \"kubernetes.io/projected/2c068d1c-3f6c-49a3-bf65-d29b68c5ad11-kube-api-access-hh6d4\") pod \"test-operator-controller-manager-5dc6794d5b-w2zff\" (UID: \"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.752850 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjm2\" (UniqueName: \"kubernetes.io/projected/5ba0d806-2bcd-45f1-b529-36ed243d775b-kube-api-access-gjjm2\") pod \"watcher-operator-controller-manager-bccc79885-dhrj6\" (UID: \"5ba0d806-2bcd-45f1-b529-36ed243d775b\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.778289 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjm2\" (UniqueName: \"kubernetes.io/projected/5ba0d806-2bcd-45f1-b529-36ed243d775b-kube-api-access-gjjm2\") pod \"watcher-operator-controller-manager-bccc79885-dhrj6\" (UID: \"5ba0d806-2bcd-45f1-b529-36ed243d775b\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.778290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh6d4\" (UniqueName: \"kubernetes.io/projected/2c068d1c-3f6c-49a3-bf65-d29b68c5ad11-kube-api-access-hh6d4\") pod \"test-operator-controller-manager-5dc6794d5b-w2zff\" (UID: \"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.779599 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.780849 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.784224 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.784386 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hhlq9" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.784826 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.787341 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.789209 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.806864 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.811839 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.848750 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.855329 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.855602 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfwh\" (UniqueName: \"kubernetes.io/projected/3e30fc60-012b-4a56-9cf0-56ff13e835d4-kube-api-access-4nfwh\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.855743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.857509 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.875126 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.896646 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.898549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.910588 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8vbbp" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.923841 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl"] Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.927297 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.957654 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfwh\" (UniqueName: \"kubernetes.io/projected/3e30fc60-012b-4a56-9cf0-56ff13e835d4-kube-api-access-4nfwh\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.957737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j95v\" (UniqueName: \"kubernetes.io/projected/1aebc8ba-eb1d-49a1-843b-3634bbbd4556-kube-api-access-2j95v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pkzl\" (UID: \"1aebc8ba-eb1d-49a1-843b-3634bbbd4556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.957774 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.957864 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.958043 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.958090 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:15.458075938 +0000 UTC m=+1353.931396461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.958433 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: E0226 14:36:14.958467 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:15.458457969 +0000 UTC m=+1353.931778492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "webhook-server-cert" not found Feb 26 14:36:14 crc kubenswrapper[4809]: I0226 14:36:14.984280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfwh\" (UniqueName: \"kubernetes.io/projected/3e30fc60-012b-4a56-9cf0-56ff13e835d4-kube-api-access-4nfwh\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.062130 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j95v\" (UniqueName: \"kubernetes.io/projected/1aebc8ba-eb1d-49a1-843b-3634bbbd4556-kube-api-access-2j95v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pkzl\" (UID: \"1aebc8ba-eb1d-49a1-843b-3634bbbd4556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.062254 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.062505 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.062568 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:16.062550707 +0000 UTC m=+1354.535871230 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.088943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j95v\" (UniqueName: \"kubernetes.io/projected/1aebc8ba-eb1d-49a1-843b-3634bbbd4556-kube-api-access-2j95v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7pkzl\" (UID: \"1aebc8ba-eb1d-49a1-843b-3634bbbd4556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.169293 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b"] Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.244481 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" event={"ID":"452db9cf-1689-42fa-bd48-15be5d5012e4","Type":"ContainerStarted","Data":"a9fec78cef54ec33a7cdc447685799c7fd05a3bf843cb39fcf550d7a169cb76d"} Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.247681 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.469177 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.469341 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.469538 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.469605 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:16.469584594 +0000 UTC m=+1354.942905117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.470083 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.470128 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:16.470117479 +0000 UTC m=+1354.943438002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.569556 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr"] Feb 26 14:36:15 crc kubenswrapper[4809]: I0226 14:36:15.570519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.570715 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: E0226 14:36:15.570798 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:17.57077513 +0000 UTC m=+1356.044095703 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:15 crc kubenswrapper[4809]: W0226 14:36:15.577391 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94df460_1916_4302_a528_1850277c2c68.slice/crio-9fb91cd8236c7a7a9a736df59018363acf4c781861eb5141368323c7ac874437 WatchSource:0}: Error finding container 9fb91cd8236c7a7a9a736df59018363acf4c781861eb5141368323c7ac874437: Status 404 returned error can't find the container with id 9fb91cd8236c7a7a9a736df59018363acf4c781861eb5141368323c7ac874437 Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.079601 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.079836 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.079938 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:18.079914209 +0000 UTC m=+1356.553234742 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.198349 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.218091 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.235357 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.243146 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.252449 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76"] Feb 26 14:36:16 crc kubenswrapper[4809]: W0226 14:36:16.255486 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb891860e_25ba_48f0_90f1_a9f481e661eb.slice/crio-189f4b001ecc09f833af3d84166ae549243c0a5f417bf2166815903d5bd6f27e WatchSource:0}: Error finding container 189f4b001ecc09f833af3d84166ae549243c0a5f417bf2166815903d5bd6f27e: Status 404 returned error can't find the container with id 189f4b001ecc09f833af3d84166ae549243c0a5f417bf2166815903d5bd6f27e Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.293474 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.296681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" event={"ID":"a94df460-1916-4302-a528-1850277c2c68","Type":"ContainerStarted","Data":"9fb91cd8236c7a7a9a736df59018363acf4c781861eb5141368323c7ac874437"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.296709 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.296727 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff"] Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.314062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" event={"ID":"e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae","Type":"ContainerStarted","Data":"2a854a5b273f474a3a968916d49364e40a9519d4bef2144531b1b0f5ce08f828"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.330302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" event={"ID":"51190e04-2cb1-41e9-9d62-23ef12d0edd3","Type":"ContainerStarted","Data":"8c1aed3cadaae5961d3492ccb571c55f8034ccaf8afbaa316d0dc29305f358f4"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.336204 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" event={"ID":"7a66a093-3f9f-49a8-a45b-84aef0465d4e","Type":"ContainerStarted","Data":"29ca15094e58b536037e16d3d2328db5922457125e316791100c2e76094de42c"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.342339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" event={"ID":"f88f4170-586f-4203-8c9b-12aa0865a6be","Type":"ContainerStarted","Data":"1427ad05bba24abe456c4be621a9686cabd47d4333af8c4d6c32c52f6586d38d"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.350461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" event={"ID":"369ebb20-08ea-4aa4-ba33-8eecc4a208ca","Type":"ContainerStarted","Data":"836bd106a8bf98ca1737baacf0322abe9a6ed49ef9ac765011ba3b5d6eb1d7a2"} Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.491936 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:16 crc kubenswrapper[4809]: I0226 14:36:16.492078 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.492158 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.492230 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.492246 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:18.492221125 +0000 UTC m=+1356.965541648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:16 crc kubenswrapper[4809]: E0226 14:36:16.492289 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:18.492269516 +0000 UTC m=+1356.965590109 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "webhook-server-cert" not found Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.006273 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.019620 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.039504 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.056685 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm"] Feb 26 14:36:17 crc kubenswrapper[4809]: E0226 14:36:17.067308 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hh6d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-5dc6794d5b-w2zff_openstack-operators(2c068d1c-3f6c-49a3-bf65-d29b68c5ad11): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 14:36:17 crc kubenswrapper[4809]: E0226 14:36:17.068459 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.086030 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.122248 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.132101 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.133876 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.140728 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.145652 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl"] Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.390445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" event={"ID":"d05f3883-4b90-4b5d-94b2-b7e916a66ed6","Type":"ContainerStarted","Data":"fd5a445dd64efe43568de0780c819ff682088f3761ea8ef799277945c9a8c638"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.397029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" event={"ID":"957002f1-5ca4-484b-b664-b7b563257915","Type":"ContainerStarted","Data":"e023c17a8b968f45364e3adad56df4dbe23d5af344a28f4c1535637ceca00d59"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.404176 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" event={"ID":"0dc90358-78e1-4391-9b04-72fb1a0ffb6e","Type":"ContainerStarted","Data":"b910662920e4c39c1cace20e4964eb29c85f5e7a07b752e2e0414e2ad366170d"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.421303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" event={"ID":"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11","Type":"ContainerStarted","Data":"ef2b14e8530417171dbc30060500c6fb1a6d9de9609400f61c7f5127b6ebbe29"} Feb 26 14:36:17 crc kubenswrapper[4809]: E0226 14:36:17.432973 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" Feb 26 14:36:17 crc 
kubenswrapper[4809]: I0226 14:36:17.436955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" event={"ID":"5ba0d806-2bcd-45f1-b529-36ed243d775b","Type":"ContainerStarted","Data":"8ce10aa0221827de15c4f7cc7fcb4d911388ff0c9f02d06e26a6c3bc76cb0091"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.469255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" event={"ID":"b891860e-25ba-48f0-90f1-a9f481e661eb","Type":"ContainerStarted","Data":"189f4b001ecc09f833af3d84166ae549243c0a5f417bf2166815903d5bd6f27e"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.495415 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" event={"ID":"db50276a-5e85-4edb-9538-0b42201fbe74","Type":"ContainerStarted","Data":"756c77bb47dc1f498f7b03a431362b94f66f302383d26b12bd144468f9e0c78d"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.517771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" event={"ID":"ed2539dd-3109-42bf-9c5b-aee680db3b4f","Type":"ContainerStarted","Data":"381b88ddb20fdb4eae2e44dbc9814bff1fde6f51cbb0e5b16c38bc1b98ca4941"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.535596 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" event={"ID":"6f049af1-526c-496e-a9af-4066b69ed359","Type":"ContainerStarted","Data":"1398f5dd38919540e1700dddf6574f616bf5b1314ee99026294fe2e633f78555"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.541184 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" event={"ID":"2b26231b-2e6e-4484-8014-6dcf40d06f40","Type":"ContainerStarted","Data":"93849a1ab7db6cc12863895bb874bf2d461c533160b7f0485a7b230e8e304e7a"} Feb 26 14:36:17 crc kubenswrapper[4809]: I0226 14:36:17.630447 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:17 crc kubenswrapper[4809]: E0226 14:36:17.630657 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:17 crc kubenswrapper[4809]: E0226 14:36:17.630751 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:21.63072719 +0000 UTC m=+1360.104047763 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: I0226 14:36:18.147551 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.147789 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.147907 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:22.147856436 +0000 UTC m=+1360.621176959 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.558654 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" Feb 26 14:36:18 crc kubenswrapper[4809]: I0226 14:36:18.559972 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:18 crc kubenswrapper[4809]: I0226 14:36:18.560172 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.560398 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.560449 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. 
No retries permitted until 2026-02-26 14:36:22.560432001 +0000 UTC m=+1361.033752524 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.560498 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 14:36:18 crc kubenswrapper[4809]: E0226 14:36:18.560525 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:22.560516593 +0000 UTC m=+1361.033837126 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "webhook-server-cert" not found Feb 26 14:36:21 crc kubenswrapper[4809]: I0226 14:36:21.631772 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:21 crc kubenswrapper[4809]: E0226 14:36:21.631993 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:21 crc kubenswrapper[4809]: E0226 14:36:21.632118 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:29.632082042 +0000 UTC m=+1368.105402565 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: I0226 14:36:22.243471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.243674 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.243969 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. 
No retries permitted until 2026-02-26 14:36:30.24394632 +0000 UTC m=+1368.717266853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: I0226 14:36:22.567813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:22 crc kubenswrapper[4809]: I0226 14:36:22.568567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.568807 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.568876 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:30.568857533 +0000 UTC m=+1369.042178056 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.569388 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 14:36:22 crc kubenswrapper[4809]: E0226 14:36:22.569429 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:30.569419629 +0000 UTC m=+1369.042740152 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "webhook-server-cert" not found Feb 26 14:36:27 crc kubenswrapper[4809]: E0226 14:36:27.926835 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Feb 26 14:36:27 crc kubenswrapper[4809]: E0226 14:36:27.927549 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szr2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-wmn76_openstack-operators(51190e04-2cb1-41e9-9d62-23ef12d0edd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:27 crc kubenswrapper[4809]: E0226 14:36:27.928760 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" 
podUID="51190e04-2cb1-41e9-9d62-23ef12d0edd3" Feb 26 14:36:28 crc kubenswrapper[4809]: E0226 14:36:28.304596 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 26 14:36:28 crc kubenswrapper[4809]: E0226 14:36:28.304766 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f8mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-psj8j_openstack-operators(b891860e-25ba-48f0-90f1-a9f481e661eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:28 crc kubenswrapper[4809]: E0226 14:36:28.305968 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" podUID="b891860e-25ba-48f0-90f1-a9f481e661eb" Feb 26 14:36:28 crc kubenswrapper[4809]: E0226 14:36:28.654555 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" podUID="b891860e-25ba-48f0-90f1-a9f481e661eb" Feb 26 14:36:28 crc kubenswrapper[4809]: E0226 14:36:28.669093 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" podUID="51190e04-2cb1-41e9-9d62-23ef12d0edd3" Feb 26 14:36:29 crc kubenswrapper[4809]: I0226 14:36:29.707176 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:29 crc kubenswrapper[4809]: E0226 14:36:29.707407 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:29 crc kubenswrapper[4809]: E0226 14:36:29.707504 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert podName:2130e114-53fd-4853-bd3a-df26c1c3df4a nodeName:}" failed. No retries permitted until 2026-02-26 14:36:45.707476121 +0000 UTC m=+1384.180796654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert") pod "infra-operator-controller-manager-79d975b745-mvll2" (UID: "2130e114-53fd-4853-bd3a-df26c1c3df4a") : secret "infra-operator-webhook-server-cert" not found Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.318161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.318413 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.318508 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert podName:ed3d7dc0-026c-4ed5-b816-b0249300c743 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:46.318479585 +0000 UTC m=+1384.791800108 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" (UID: "ed3d7dc0-026c-4ed5-b816-b0249300c743") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 14:36:30 crc kubenswrapper[4809]: W0226 14:36:30.444323 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bdb1ef_c56b_4abe_b491_9c24a8f9089d.slice/crio-8d000d408f2f8a25d9fee7a4855c4c05f6718509580effb09921a0f58c3b6aaf WatchSource:0}: Error finding container 8d000d408f2f8a25d9fee7a4855c4c05f6718509580effb09921a0f58c3b6aaf: Status 404 returned error can't find the container with id 8d000d408f2f8a25d9fee7a4855c4c05f6718509580effb09921a0f58c3b6aaf Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.622507 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.622635 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.622801 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.622895 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs podName:3e30fc60-012b-4a56-9cf0-56ff13e835d4 nodeName:}" failed. No retries permitted until 2026-02-26 14:36:46.622875795 +0000 UTC m=+1385.096196378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs") pod "openstack-operator-controller-manager-5fc9897686-rt5g8" (UID: "3e30fc60-012b-4a56-9cf0-56ff13e835d4") : secret "metrics-server-cert" not found Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.628862 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-webhook-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.673661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" event={"ID":"00bdb1ef-c56b-4abe-b491-9c24a8f9089d","Type":"ContainerStarted","Data":"8d000d408f2f8a25d9fee7a4855c4c05f6718509580effb09921a0f58c3b6aaf"} Feb 26 14:36:30 crc kubenswrapper[4809]: I0226 14:36:30.676401 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" event={"ID":"15986ded-5e26-4bcc-bf72-ee349431961a","Type":"ContainerStarted","Data":"c4fc8356b73ff6cbe35cb3bd2cd08127f8bc22baf1cd201c2d78b790cfaec9ad"} Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.891628 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.891817 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kgc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-7ptff_openstack-operators(0dc90358-78e1-4391-9b04-72fb1a0ffb6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:30 crc kubenswrapper[4809]: E0226 14:36:30.893140 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podUID="0dc90358-78e1-4391-9b04-72fb1a0ffb6e" Feb 26 14:36:31 crc kubenswrapper[4809]: E0226 14:36:31.686091 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podUID="0dc90358-78e1-4391-9b04-72fb1a0ffb6e" Feb 26 14:36:32 crc kubenswrapper[4809]: W0226 14:36:32.164970 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aebc8ba_eb1d_49a1_843b_3634bbbd4556.slice/crio-afd0a1dd8e73861de0a69c8433d2b0860cb51a06b943c72e3178c1eb794adbda WatchSource:0}: Error finding container afd0a1dd8e73861de0a69c8433d2b0860cb51a06b943c72e3178c1eb794adbda: Status 404 returned error can't find the container with id afd0a1dd8e73861de0a69c8433d2b0860cb51a06b943c72e3178c1eb794adbda Feb 26 14:36:32 crc kubenswrapper[4809]: I0226 14:36:32.703530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" event={"ID":"1aebc8ba-eb1d-49a1-843b-3634bbbd4556","Type":"ContainerStarted","Data":"afd0a1dd8e73861de0a69c8433d2b0860cb51a06b943c72e3178c1eb794adbda"} Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.095046 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.095288 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7jvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-wbnwh_openstack-operators(f88f4170-586f-4203-8c9b-12aa0865a6be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.096700 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" podUID="f88f4170-586f-4203-8c9b-12aa0865a6be" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.192886 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970: Get \"http://38.129.56.82:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970\": context canceled" image="38.129.56.82:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.192942 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970: Get \"http://38.129.56.82:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970\": 
context canceled" image="38.129.56.82:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.193090 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.82:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4lsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-57dc789b66-zjvhb_openstack-operators(2b26231b-2e6e-4484-8014-6dcf40d06f40): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970: Get \"http://38.129.56.82:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970\": context canceled" logger="UnhandledError" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.194265 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970: Get \\\"http://38.129.56.82:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:26256c8b0b1e9e9c93c1e11718c3b4619a9b8128fab0494b45856c5fc83a8970\\\": context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" 
Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.712853 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.82:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" Feb 26 14:36:33 crc kubenswrapper[4809]: E0226 14:36:33.713005 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" podUID="f88f4170-586f-4203-8c9b-12aa0865a6be" Feb 26 14:36:36 crc kubenswrapper[4809]: E0226 14:36:36.554319 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 26 14:36:36 crc kubenswrapper[4809]: E0226 14:36:36.554806 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-flchr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-llxf9_openstack-operators(d05f3883-4b90-4b5d-94b2-b7e916a66ed6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:36 crc kubenswrapper[4809]: E0226 14:36:36.556042 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podUID="d05f3883-4b90-4b5d-94b2-b7e916a66ed6" Feb 26 14:36:36 crc kubenswrapper[4809]: E0226 14:36:36.748268 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podUID="d05f3883-4b90-4b5d-94b2-b7e916a66ed6" Feb 26 14:36:38 crc kubenswrapper[4809]: E0226 14:36:38.950374 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 14:36:38 crc kubenswrapper[4809]: E0226 14:36:38.950951 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dthxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-vb9br_openstack-operators(7a66a093-3f9f-49a8-a45b-84aef0465d4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:38 crc kubenswrapper[4809]: E0226 14:36:38.952326 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podUID="7a66a093-3f9f-49a8-a45b-84aef0465d4e" Feb 26 14:36:39 crc kubenswrapper[4809]: E0226 14:36:39.742723 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 26 14:36:39 crc kubenswrapper[4809]: E0226 14:36:39.743214 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jdvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-xcrth_openstack-operators(db50276a-5e85-4edb-9538-0b42201fbe74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:36:39 crc kubenswrapper[4809]: E0226 14:36:39.744353 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" Feb 26 14:36:39 crc kubenswrapper[4809]: E0226 14:36:39.774398 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podUID="7a66a093-3f9f-49a8-a45b-84aef0465d4e" Feb 26 14:36:39 crc kubenswrapper[4809]: E0226 14:36:39.774466 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" Feb 26 14:36:45 crc kubenswrapper[4809]: I0226 14:36:45.720418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:45 crc kubenswrapper[4809]: I0226 14:36:45.729623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2130e114-53fd-4853-bd3a-df26c1c3df4a-cert\") pod \"infra-operator-controller-manager-79d975b745-mvll2\" (UID: \"2130e114-53fd-4853-bd3a-df26c1c3df4a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:45 crc kubenswrapper[4809]: I0226 
14:36:45.968920 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-728wj" Feb 26 14:36:45 crc kubenswrapper[4809]: I0226 14:36:45.977948 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.340859 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.345380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed3d7dc0-026c-4ed5-b816-b0249300c743-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6\" (UID: \"ed3d7dc0-026c-4ed5-b816-b0249300c743\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.526394 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2fj62" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.534001 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.646574 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.649717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e30fc60-012b-4a56-9cf0-56ff13e835d4-metrics-certs\") pod \"openstack-operator-controller-manager-5fc9897686-rt5g8\" (UID: \"3e30fc60-012b-4a56-9cf0-56ff13e835d4\") " pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.707398 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hhlq9" Feb 26 14:36:46 crc kubenswrapper[4809]: I0226 14:36:46.716117 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.884358 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" event={"ID":"e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae","Type":"ContainerStarted","Data":"639d517750ee22ae5c1bb037727d76f122b29194c2cfed9dcc2a1ffd2930d0f5"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.886511 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.909096 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" event={"ID":"452db9cf-1689-42fa-bd48-15be5d5012e4","Type":"ContainerStarted","Data":"9d3c1fc98511222252ea57b55fe1011a80403db0b08916cf3d65c59b5d08841a"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.910201 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.911423 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" event={"ID":"ed2539dd-3109-42bf-9c5b-aee680db3b4f","Type":"ContainerStarted","Data":"0c5e06a91c4814d0bc719daf7863fb6724213c2a2710995e389546dd8cc06e9f"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.911897 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.936393 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" event={"ID":"6f049af1-526c-496e-a9af-4066b69ed359","Type":"ContainerStarted","Data":"a7ffda18692a1d129f8311be089c759ed03130135e65ead924aaa1780cecbe29"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.937334 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.943398 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" event={"ID":"369ebb20-08ea-4aa4-ba33-8eecc4a208ca","Type":"ContainerStarted","Data":"cd2e404adcdd4f01fc5f2aae5e8fd2288b590d731e7ea16feeabdbdd163e3f33"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.944616 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.945954 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" event={"ID":"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11","Type":"ContainerStarted","Data":"4d9e98f8d2c15ec59864808b19333469bf19a162a7c747aba69960892ad7748a"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.946381 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.970459 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" podStartSLOduration=8.649028083 podStartE2EDuration="35.97043522s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.242689275 +0000 UTC m=+1354.716009798" lastFinishedPulling="2026-02-26 14:36:43.564096412 +0000 UTC m=+1382.037416935" observedRunningTime="2026-02-26 14:36:48.937552786 +0000 UTC m=+1387.410873319" watchObservedRunningTime="2026-02-26 14:36:48.97043522 +0000 UTC m=+1387.443755743" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.980812 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" event={"ID":"a94df460-1916-4302-a528-1850277c2c68","Type":"ContainerStarted","Data":"cffae942816fe6bbeaa190a1ec97a6bc452e9eb5bb620257b9b377656c0e7a81"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.982091 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.985079 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" event={"ID":"5ba0d806-2bcd-45f1-b529-36ed243d775b","Type":"ContainerStarted","Data":"de554ca35274cce817eb9a42e010310bb51bf049a6f8310ce8cfec875cf4960e"} Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.985864 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:48 crc kubenswrapper[4809]: I0226 14:36:48.992361 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" podStartSLOduration=13.910295758 podStartE2EDuration="35.992338852s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:15.201110264 +0000 UTC m=+1353.674430787" lastFinishedPulling="2026-02-26 14:36:37.283153348 +0000 UTC m=+1375.756473881" observedRunningTime="2026-02-26 14:36:48.976530283 +0000 UTC m=+1387.449850816" watchObservedRunningTime="2026-02-26 14:36:48.992338852 +0000 UTC m=+1387.465659375" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.013445 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" podStartSLOduration=4.25217407 podStartE2EDuration="35.013424122s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.059557628 +0000 UTC m=+1355.532878151" lastFinishedPulling="2026-02-26 14:36:47.82080768 +0000 UTC m=+1386.294128203" observedRunningTime="2026-02-26 14:36:49.013326039 +0000 UTC m=+1387.486646562" watchObservedRunningTime="2026-02-26 14:36:49.013424122 +0000 UTC m=+1387.486744635" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.082780 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" podStartSLOduration=5.885433975 podStartE2EDuration="36.082737321s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.037943874 +0000 UTC m=+1355.511264397" lastFinishedPulling="2026-02-26 14:36:47.23524722 +0000 UTC m=+1385.708567743" observedRunningTime="2026-02-26 
14:36:49.040324856 +0000 UTC m=+1387.513645379" watchObservedRunningTime="2026-02-26 14:36:49.082737321 +0000 UTC m=+1387.556057854" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.107768 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" podStartSLOduration=4.318585856 podStartE2EDuration="35.107747242s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.066119654 +0000 UTC m=+1355.539440177" lastFinishedPulling="2026-02-26 14:36:47.85528104 +0000 UTC m=+1386.328601563" observedRunningTime="2026-02-26 14:36:49.064544284 +0000 UTC m=+1387.537864807" watchObservedRunningTime="2026-02-26 14:36:49.107747242 +0000 UTC m=+1387.581067765" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.115843 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" podStartSLOduration=8.796753981 podStartE2EDuration="36.115823492s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.244890137 +0000 UTC m=+1354.718210660" lastFinishedPulling="2026-02-26 14:36:43.563959638 +0000 UTC m=+1382.037280171" observedRunningTime="2026-02-26 14:36:49.10836383 +0000 UTC m=+1387.581684353" watchObservedRunningTime="2026-02-26 14:36:49.115823492 +0000 UTC m=+1387.589144005" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.183820 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podStartSLOduration=4.429652074 podStartE2EDuration="35.183800444s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.067191745 +0000 UTC m=+1355.540512268" lastFinishedPulling="2026-02-26 14:36:47.821340115 +0000 UTC m=+1386.294660638" observedRunningTime="2026-02-26 14:36:49.171897225 +0000 UTC m=+1387.645217748" watchObservedRunningTime="2026-02-26 14:36:49.183800444 +0000 UTC m=+1387.657120967" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.236579 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" podStartSLOduration=8.251942459 podStartE2EDuration="36.236558033s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:15.579321443 +0000 UTC m=+1354.052641966" lastFinishedPulling="2026-02-26 14:36:43.563937007 +0000 UTC m=+1382.037257540" observedRunningTime="2026-02-26 14:36:49.231320974 +0000 UTC m=+1387.704641497" watchObservedRunningTime="2026-02-26 14:36:49.236558033 +0000 UTC m=+1387.709878556" Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.453318 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mvll2"] Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.474424 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8"] Feb 26 14:36:49 crc kubenswrapper[4809]: I0226 14:36:49.532996 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6"] Feb 26 14:36:49 crc kubenswrapper[4809]: W0226 14:36:49.547535 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3d7dc0_026c_4ed5_b816_b0249300c743.slice/crio-3262148631e6aff4915814666266abf32da6b2c0418c623bd5260517e8dbbb12 WatchSource:0}: Error finding container 3262148631e6aff4915814666266abf32da6b2c0418c623bd5260517e8dbbb12: Status 404 returned error can't find the container with id 3262148631e6aff4915814666266abf32da6b2c0418c623bd5260517e8dbbb12 Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.019386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" event={"ID":"15986ded-5e26-4bcc-bf72-ee349431961a","Type":"ContainerStarted","Data":"41758700ccc6b65dea69ee6d44a0fe286b19e6ab1324e2edc546f0cf71ed88e7"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.019803 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.034350 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" event={"ID":"3e30fc60-012b-4a56-9cf0-56ff13e835d4","Type":"ContainerStarted","Data":"c929a6e96c445915f9c631f8f9cbabb57f8df1cf3aa006b20a1f4ee3d256c84f"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.034409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" event={"ID":"3e30fc60-012b-4a56-9cf0-56ff13e835d4","Type":"ContainerStarted","Data":"c27d4364cfafbdbe5f61c6e7ac971576c3c9d4dfa9a7440d3e58985ce91f57a7"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.035223 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.047306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" event={"ID":"f88f4170-586f-4203-8c9b-12aa0865a6be","Type":"ContainerStarted","Data":"1e4eae6d55676a6e531262b4c96f0cd968baa0c191b1fc120e2e6b1d390849c5"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.048182 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.072255 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" podStartSLOduration=19.185993103 podStartE2EDuration="36.072234391s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:30.980175178 +0000 UTC m=+1369.453495701" lastFinishedPulling="2026-02-26 14:36:47.866416466 +0000 UTC m=+1386.339736989" observedRunningTime="2026-02-26 14:36:50.061450275 +0000 UTC m=+1388.534770798" watchObservedRunningTime="2026-02-26 14:36:50.072234391 +0000 UTC m=+1388.545554914" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.079463 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" event={"ID":"00bdb1ef-c56b-4abe-b491-9c24a8f9089d","Type":"ContainerStarted","Data":"ccdc94939c027b111d257e9982b36a511200daaafe942327e02648a255181f8f"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.080324 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.089397 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" event={"ID":"1aebc8ba-eb1d-49a1-843b-3634bbbd4556","Type":"ContainerStarted","Data":"ed24fbebfca2fb1ced0f78f98883390211fe10348127689b678c24e05c7395a8"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.104563 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podStartSLOduration=36.10454324 podStartE2EDuration="36.10454324s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:36:50.099748013 +0000 UTC m=+1388.573068536" watchObservedRunningTime="2026-02-26 14:36:50.10454324 +0000 UTC m=+1388.577863763" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.112168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" event={"ID":"b891860e-25ba-48f0-90f1-a9f481e661eb","Type":"ContainerStarted","Data":"89e36dbd71aa181252d882a0c238e67a9de7a961f3a673cd76f6d22dc808c2b8"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.112680 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.117772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" event={"ID":"2130e114-53fd-4853-bd3a-df26c1c3df4a","Type":"ContainerStarted","Data":"de2574c7dc70ddbc014218714cfdf7841b52b7ed0a5ffb09a4ae34fcafea1054"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.131203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" event={"ID":"51190e04-2cb1-41e9-9d62-23ef12d0edd3","Type":"ContainerStarted","Data":"ea11e5d6db588f3af5a069a2af84ca23cc365055f2c84a79ff6f32b919361661"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.131556 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.165668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" event={"ID":"957002f1-5ca4-484b-b664-b7b563257915","Type":"ContainerStarted","Data":"19860a6a65397f633b1011a51fb18259f27619b8bd9aab5cff646584f6e8f4bb"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.166726 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.179912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" event={"ID":"ed3d7dc0-026c-4ed5-b816-b0249300c743","Type":"ContainerStarted","Data":"3262148631e6aff4915814666266abf32da6b2c0418c623bd5260517e8dbbb12"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.194551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" event={"ID":"0dc90358-78e1-4391-9b04-72fb1a0ffb6e","Type":"ContainerStarted","Data":"9fdf65ce2b7a545b468d9cadff8d62c023148e22d7030135f03b3ee4ed21173e"} Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.195279 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.228410 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" podStartSLOduration=5.507398263 podStartE2EDuration="37.228386579s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.205736944 +0000 UTC m=+1354.679057467" lastFinishedPulling="2026-02-26 14:36:47.92672526 +0000 UTC m=+1386.400045783" observedRunningTime="2026-02-26 14:36:50.163263928 +0000 UTC m=+1388.636584471" watchObservedRunningTime="2026-02-26 14:36:50.228386579 +0000 UTC m=+1388.701707102" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.268536 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" podStartSLOduration=5.627393604 podStartE2EDuration="37.268489519s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.225144016 +0000 UTC m=+1354.698464539" lastFinishedPulling="2026-02-26 14:36:47.866239931 +0000 UTC m=+1386.339560454" observedRunningTime="2026-02-26 14:36:50.196676818 +0000 UTC m=+1388.669997351" watchObservedRunningTime="2026-02-26 14:36:50.268489519 +0000 UTC m=+1388.741810042" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.286567 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" podStartSLOduration=20.478064352 podStartE2EDuration="36.286550922s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:32.16785568 +0000 UTC m=+1370.641176203" lastFinishedPulling="2026-02-26 14:36:47.97634224 +0000 UTC m=+1386.449662773" observedRunningTime="2026-02-26 14:36:50.226870996 +0000 UTC m=+1388.700191529" watchObservedRunningTime="2026-02-26 14:36:50.286550922 +0000 UTC m=+1388.759871445" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.321549 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" podStartSLOduration=20.479249797 podStartE2EDuration="37.321528456s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:30.980167528 +0000 UTC m=+1369.453488051" lastFinishedPulling="2026-02-26 14:36:47.822446187 +0000 UTC m=+1386.295766710" observedRunningTime="2026-02-26 14:36:50.301359603 +0000 UTC m=+1388.774680146" watchObservedRunningTime="2026-02-26 14:36:50.321528456 +0000 UTC m=+1388.794848979" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.348329 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" podStartSLOduration=5.7151309569999995 podStartE2EDuration="37.348304377s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.303283187 +0000 UTC m=+1354.776603710" lastFinishedPulling="2026-02-26 14:36:47.936456607 
+0000 UTC m=+1386.409777130" observedRunningTime="2026-02-26 14:36:50.333162387 +0000 UTC m=+1388.806482910" watchObservedRunningTime="2026-02-26 14:36:50.348304377 +0000 UTC m=+1388.821624910" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.366101 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podStartSLOduration=5.610913513 podStartE2EDuration="36.366085932s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.065992451 +0000 UTC m=+1355.539312974" lastFinishedPulling="2026-02-26 14:36:47.82116486 +0000 UTC m=+1386.294485393" observedRunningTime="2026-02-26 14:36:50.364508348 +0000 UTC m=+1388.837828871" watchObservedRunningTime="2026-02-26 14:36:50.366085932 +0000 UTC m=+1388.839406465" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.380469 4809 scope.go:117] "RemoveContainer" containerID="1211139415e3c1090d3baa74500fae3b31d5855d20fe7c9c1c3336944ddca6c5" Feb 26 14:36:50 crc kubenswrapper[4809]: I0226 14:36:50.423413 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podStartSLOduration=5.817259219 podStartE2EDuration="37.423391051s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.320232568 +0000 UTC m=+1354.793553091" lastFinishedPulling="2026-02-26 14:36:47.9263644 +0000 UTC m=+1386.399684923" observedRunningTime="2026-02-26 14:36:50.416282359 +0000 UTC m=+1388.889602882" watchObservedRunningTime="2026-02-26 14:36:50.423391051 +0000 UTC m=+1388.896711574" Feb 26 14:36:51 crc kubenswrapper[4809]: I0226 14:36:51.206002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" event={"ID":"d05f3883-4b90-4b5d-94b2-b7e916a66ed6","Type":"ContainerStarted","Data":"60c844f9469e4a4ef459bf915bce93e6bdb14bba656134a8aa8aa7cfb5f6b785"} Feb 26 14:36:51 crc kubenswrapper[4809]: I0226 14:36:51.231306 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podStartSLOduration=3.672676124 podStartE2EDuration="38.23128228s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.297557514 +0000 UTC m=+1354.770878067" lastFinishedPulling="2026-02-26 14:36:50.8561637 +0000 UTC m=+1389.329484223" observedRunningTime="2026-02-26 14:36:51.224385904 +0000 UTC m=+1389.697706427" watchObservedRunningTime="2026-02-26 14:36:51.23128228 +0000 UTC m=+1389.704602803" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.086318 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.118859 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.164767 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.308263 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.337624 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.338913 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.444674 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.495906 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.564184 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.583274 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.617778 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.803579 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.851951 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.860297 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.885659 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" Feb 26 14:36:54 crc kubenswrapper[4809]: I0226 14:36:54.944827 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.245472 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" event={"ID":"2b26231b-2e6e-4484-8014-6dcf40d06f40","Type":"ContainerStarted","Data":"b87a4dd3780280f9aa299638d3d80026bcbf2d017160e4a5051af3d10c97458b"} Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.247130 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.249894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" 
event={"ID":"db50276a-5e85-4edb-9538-0b42201fbe74","Type":"ContainerStarted","Data":"033148a772fde4d4298a372dfca4d125ead7ea075139e271c4e8eed6c166a466"} Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.250080 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.251223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" event={"ID":"2130e114-53fd-4853-bd3a-df26c1c3df4a","Type":"ContainerStarted","Data":"641ffe48b7935d9029d25da4fe245381b29be4fef1c12a6e546e4712cb96de9d"} Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.251940 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.253317 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" event={"ID":"7a66a093-3f9f-49a8-a45b-84aef0465d4e","Type":"ContainerStarted","Data":"a0f5e05e54ba17ba576f862937e9c01a2bca3765f74a76f7b71297785b1f993d"} Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.253544 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.254409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" event={"ID":"ed3d7dc0-026c-4ed5-b816-b0249300c743","Type":"ContainerStarted","Data":"0bec3112cffeb93ad58c6d529d8b57ca2c39acea124c75ebe7328f0754b78bc5"} Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.254733 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.287392 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podStartSLOduration=3.999396145 podStartE2EDuration="42.287374805s" podCreationTimestamp="2026-02-26 14:36:14 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.045292982 +0000 UTC m=+1355.518613505" lastFinishedPulling="2026-02-26 14:36:55.333271642 +0000 UTC m=+1393.806592165" observedRunningTime="2026-02-26 14:36:56.283264318 +0000 UTC m=+1394.756584841" watchObservedRunningTime="2026-02-26 14:36:56.287374805 +0000 UTC m=+1394.760695318" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.312829 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podStartSLOduration=5.021511065 podStartE2EDuration="43.312810938s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:17.03745999 +0000 UTC m=+1355.510780513" lastFinishedPulling="2026-02-26 14:36:55.328759863 +0000 UTC m=+1393.802080386" observedRunningTime="2026-02-26 14:36:56.308647169 +0000 UTC m=+1394.781967702" watchObservedRunningTime="2026-02-26 14:36:56.312810938 +0000 UTC m=+1394.786131451" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.400428 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podStartSLOduration=37.582022008 podStartE2EDuration="43.400412837s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:49.51515544 +0000 UTC m=+1387.988475963" lastFinishedPulling="2026-02-26 14:36:55.333546249 +0000 UTC m=+1393.806866792" observedRunningTime="2026-02-26 14:36:56.347967787 +0000 UTC m=+1394.821288310" watchObservedRunningTime="2026-02-26 14:36:56.400412837 +0000 UTC m=+1394.873733360" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.400835 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podStartSLOduration=37.634145569 podStartE2EDuration="43.400829609s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:49.560376215 +0000 UTC m=+1388.033696728" lastFinishedPulling="2026-02-26 14:36:55.327060245 +0000 UTC m=+1393.800380768" observedRunningTime="2026-02-26 14:36:56.39418167 +0000 UTC m=+1394.867502193" watchObservedRunningTime="2026-02-26 14:36:56.400829609 +0000 UTC m=+1394.874150132" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.722978 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 14:36:56 crc kubenswrapper[4809]: I0226 14:36:56.754323 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podStartSLOduration=4.168586116 podStartE2EDuration="43.754303374s" podCreationTimestamp="2026-02-26 14:36:13 +0000 UTC" firstStartedPulling="2026-02-26 14:36:16.230689774 +0000 UTC m=+1354.704010297" lastFinishedPulling="2026-02-26 14:36:55.816407022 +0000 UTC m=+1394.289727555" observedRunningTime="2026-02-26 14:36:56.412945393 +0000 UTC m=+1394.886265916" watchObservedRunningTime="2026-02-26 14:36:56.754303374 +0000 UTC m=+1395.227623907" Feb 26 14:37:04 crc kubenswrapper[4809]: I0226 14:37:04.284844 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" Feb 26 14:37:04 crc kubenswrapper[4809]: I0226 14:37:04.584746 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" Feb 26 14:37:04 crc kubenswrapper[4809]: I0226 14:37:04.792842 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 14:37:04 crc kubenswrapper[4809]: I0226 14:37:04.812228 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 14:37:05 crc kubenswrapper[4809]: I0226 14:37:05.983822 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 14:37:06 crc kubenswrapper[4809]: I0226 14:37:06.541822 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.466582 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 
14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.471922 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.473554 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-95lsn" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.474360 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.474564 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.474629 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.493475 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.552300 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.554436 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.563331 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.577375 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.641644 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxh7p\" (UniqueName: \"kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.641697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.742948 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.743071 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvfp\" (UniqueName: \"kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.743098 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config\") pod 
\"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.743147 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxh7p\" (UniqueName: \"kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.743166 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.744104 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.778994 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxh7p\" (UniqueName: \"kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p\") pod \"dnsmasq-dns-675f4bcbfc-cxndf\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.814678 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.844948 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvfp\" (UniqueName: \"kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.845000 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.845106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.845845 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.846000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.864895 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvfp\" (UniqueName: \"kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp\") pod \"dnsmasq-dns-78dd6ddcc-xsdd2\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:24 crc kubenswrapper[4809]: I0226 14:37:24.921845 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:25 crc kubenswrapper[4809]: I0226 14:37:25.376546 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 14:37:25 crc kubenswrapper[4809]: I0226 14:37:25.482103 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:25 crc kubenswrapper[4809]: W0226 14:37:25.482943 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf729fd39_30eb_497d_a091_702565fdc270.slice/crio-31a5f9beaf345a3d9070f0d5240ab70b8ff24a6b4a4d117098111c95c7b1c676 WatchSource:0}: Error finding container 31a5f9beaf345a3d9070f0d5240ab70b8ff24a6b4a4d117098111c95c7b1c676: Status 404 returned error can't find the container with id 31a5f9beaf345a3d9070f0d5240ab70b8ff24a6b4a4d117098111c95c7b1c676 Feb 26 14:37:25 crc kubenswrapper[4809]: I0226 14:37:25.507271 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" event={"ID":"8f991d5d-8d77-416d-b585-9140c6411a65","Type":"ContainerStarted","Data":"a9783aea1ecca90c82298ee90c5f92d41ab6489e3ce220a288e5306a8c6514a3"} Feb 26 14:37:25 crc kubenswrapper[4809]: I0226 14:37:25.508590 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" event={"ID":"f729fd39-30eb-497d-a091-702565fdc270","Type":"ContainerStarted","Data":"31a5f9beaf345a3d9070f0d5240ab70b8ff24a6b4a4d117098111c95c7b1c676"} Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.370941 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.413841 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.415720 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.432853 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.498172 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.498227 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.498540 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llsj4\" (UniqueName: \"kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.603049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llsj4\" (UniqueName: \"kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.603190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.603234 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.604380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.605338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.639232 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llsj4\" (UniqueName: 
\"kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4\") pod \"dnsmasq-dns-666b6646f7-t6mlx\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.739129 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.741617 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.771882 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.776516 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.804964 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.911680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.914084 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:27 crc kubenswrapper[4809]: I0226 14:37:27.914165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4ph\" (UniqueName: \"kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.015729 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.015810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.015860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4ph\" (UniqueName: \"kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.017688 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.021072 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.039125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4ph\" (UniqueName: \"kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph\") pod \"dnsmasq-dns-57d769cc4f-vk76m\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.198943 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.350975 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.549673 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.553693 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.556631 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.557512 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.557536 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.561076 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hwkn2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.561174 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.561301 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.561560 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.566931 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.580983 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.582800 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.596178 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.598292 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.624382 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.640639 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.706132 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737712 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737746 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737904 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737958 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " 
pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.737987 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738044 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738179 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738195 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738217 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdmx\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738431 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 
14:37:28.738525 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738746 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8l9\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lz9\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738871 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc 
kubenswrapper[4809]: I0226 14:37:28.738934 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738952 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.738996 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.739051 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.739087 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.739106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.739310 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841285 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841624 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841645 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841670 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841746 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841782 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841931 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841977 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.841997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842061 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842098 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842148 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842179 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842219 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842236 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842250 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvdmx\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842278 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842332 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842390 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842481 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842566 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8l9\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842611 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lz9\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842648 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842666 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842700 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.842978 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.843548 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.844060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.845066 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.845459 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.845507 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.846152 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.846293 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.846476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.847465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.847714 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.847935 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.848078 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.849071 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.849138 4809 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.849740 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.849765 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/317625664eb02b71305e33edf97c9510b04dffcc4948cb871909f59709469599/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.850672 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.851418 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.851439 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bd655ee73774e45add2c059d6525cf05e0989d68eb1fadb6969bdbf604263d4/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.851480 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.851478 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.852480 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.852506 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1be914a20758f707d6b14a059f5596264bd58434ad39af7f125013f388c0c9c1/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.852520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.863552 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.866030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.868089 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lz9\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.868095 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.870267 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.870997 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.871815 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvdmx\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.874511 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.876200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8l9\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.901698 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.912075 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.950317 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.961962 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.962621 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.962946 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.963161 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.963306 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.965281 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ggb8f" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.968502 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") " pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.981188 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " pod="openstack/rabbitmq-server-1" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.986196 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 14:37:28 crc kubenswrapper[4809]: I0226 14:37:28.996559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " pod="openstack/rabbitmq-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.023408 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.062739 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.062846 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hsg8\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063137 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063174 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063234 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063365 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063411 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063438 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063463 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.063599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166060 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166141 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166197 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hsg8\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166571 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166866 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166942 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.166984 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.167002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.169586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.170640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.170806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.170938 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.172228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.172771 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.173734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.174102 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.174629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.183456 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.183495 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a3438c8e38dcffb88c2bb9cce9738361e0ac40dc000de58df3e22d6950d7f0c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.184848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hsg8\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.190084 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.227390 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.234234 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:37:29 crc kubenswrapper[4809]: I0226 14:37:29.291716 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.057212 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.061961 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.064356 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vwn4j" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.064476 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.065139 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.065382 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.073152 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.075734 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.188848 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-kolla-config\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.188938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-default\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189080 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189142 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqj7\" (UniqueName: \"kubernetes.io/projected/b25b5c98-b424-41ce-b099-876b266cf2be-kube-api-access-xhqj7\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189248 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189275 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.189294 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.292863 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.292913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.292944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.292990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-kolla-config\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.293033 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-default\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.293126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.293176 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.293199 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqj7\" (UniqueName: \"kubernetes.io/projected/b25b5c98-b424-41ce-b099-876b266cf2be-kube-api-access-xhqj7\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.294398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-kolla-config\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.294473 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-default\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.294799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b25b5c98-b424-41ce-b099-876b266cf2be-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.298830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.298898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b25b5c98-b424-41ce-b099-876b266cf2be-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.299113 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.299114 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b25b5c98-b424-41ce-b099-876b266cf2be-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.299132 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f078016dfdf4f368392e1b414f580ec3f0eabc6eaadb93f46ada73758cc5c23/globalmount\"" pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.315200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqj7\" (UniqueName: \"kubernetes.io/projected/b25b5c98-b424-41ce-b099-876b266cf2be-kube-api-access-xhqj7\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.353203 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d690ddbe-f5f8-4ae1-930b-2c04a8c3d961\") pod \"openstack-galera-0\" (UID: \"b25b5c98-b424-41ce-b099-876b266cf2be\") " pod="openstack/openstack-galera-0" Feb 26 14:37:30 crc kubenswrapper[4809]: I0226 14:37:30.388854 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.411024 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.417660 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.421943 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.422203 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.422374 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.422551 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zbvlk" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.432181 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518571 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518649 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24p9\" (UniqueName: \"kubernetes.io/projected/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kube-api-access-q24p9\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518699 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518730 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518860 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518912 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.518950 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622252 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622352 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24p9\" (UniqueName: \"kubernetes.io/projected/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kube-api-access-q24p9\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622612 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622660 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.622844 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.628204 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.631678 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.632330 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.633253 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffb3d63b0e795d29a6e9c35497fcd097752e0a0ab0b4668225471bc057c7e8c7/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.636591 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.637692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4f21dca-3b2f-4818-8356-1de8cfbbc261-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.653191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.658527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24p9\" (UniqueName: \"kubernetes.io/projected/a4f21dca-3b2f-4818-8356-1de8cfbbc261-kube-api-access-q24p9\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.659799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f21dca-3b2f-4818-8356-1de8cfbbc261-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.666995 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.672166 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.679671 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s9qtw" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.680472 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.680625 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.691233 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.716056 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aab4c6fb-7197-4b12-8b51-4f242684bd21\") pod \"openstack-cell1-galera-0\" (UID: \"a4f21dca-3b2f-4818-8356-1de8cfbbc261\") " pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.728287 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-kolla-config\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.728465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-config-data\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.728498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w2zw\" (UniqueName: \"kubernetes.io/projected/3ceedead-6111-4ca8-b2ef-c97e503513eb-kube-api-access-6w2zw\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.728559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.728785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.759288 4809 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.832815 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-config-data\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.832912 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w2zw\" (UniqueName: \"kubernetes.io/projected/3ceedead-6111-4ca8-b2ef-c97e503513eb-kube-api-access-6w2zw\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.832967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.833996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.833995 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-config-data\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.834124 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-kolla-config\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.835113 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ceedead-6111-4ca8-b2ef-c97e503513eb-kolla-config\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.843783 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.850911 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceedead-6111-4ca8-b2ef-c97e503513eb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:31 crc kubenswrapper[4809]: I0226 14:37:31.862559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w2zw\" (UniqueName: 
\"kubernetes.io/projected/3ceedead-6111-4ca8-b2ef-c97e503513eb-kube-api-access-6w2zw\") pod \"memcached-0\" (UID: \"3ceedead-6111-4ca8-b2ef-c97e503513eb\") " pod="openstack/memcached-0" Feb 26 14:37:32 crc kubenswrapper[4809]: I0226 14:37:32.071966 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 26 14:37:32 crc kubenswrapper[4809]: I0226 14:37:32.614054 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" event={"ID":"b633e7ac-4c59-4281-a4e7-243da9f909c5","Type":"ContainerStarted","Data":"bb64dac75976a3bad59204965839e4cb95282748e502b5d8da525dcb242688e9"} Feb 26 14:37:32 crc kubenswrapper[4809]: I0226 14:37:32.615927 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" event={"ID":"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95","Type":"ContainerStarted","Data":"97ef69159c640ebe6435b676ab4c8aa6668564ac4cf8c90b7382c311a8c0e079"} Feb 26 14:37:33 crc kubenswrapper[4809]: I0226 14:37:33.175291 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.306021 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.308757 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.317885 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qcfzb" Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.354077 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.411115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dl8r\" (UniqueName: \"kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r\") pod \"kube-state-metrics-0\" (UID: \"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc\") " pod="openstack/kube-state-metrics-0" Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.512953 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dl8r\" (UniqueName: \"kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r\") pod \"kube-state-metrics-0\" (UID: \"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc\") " pod="openstack/kube-state-metrics-0" Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.545805 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dl8r\" (UniqueName: \"kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r\") pod \"kube-state-metrics-0\" (UID: \"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc\") " pod="openstack/kube-state-metrics-0" Feb 26 14:37:34 crc kubenswrapper[4809]: I0226 14:37:34.664830 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.102111 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.103863 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.106761 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-gzpsd" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.107009 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.122158 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.241476 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2cg\" (UniqueName: \"kubernetes.io/projected/a1694e2c-b193-496d-b2df-d4c8857e2cc2-kube-api-access-dx2cg\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.242287 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1694e2c-b193-496d-b2df-d4c8857e2cc2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.343793 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1694e2c-b193-496d-b2df-d4c8857e2cc2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.344825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2cg\" (UniqueName: \"kubernetes.io/projected/a1694e2c-b193-496d-b2df-d4c8857e2cc2-kube-api-access-dx2cg\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.355904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1694e2c-b193-496d-b2df-d4c8857e2cc2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.380726 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2cg\" (UniqueName: \"kubernetes.io/projected/a1694e2c-b193-496d-b2df-d4c8857e2cc2-kube-api-access-dx2cg\") pod \"observability-ui-dashboards-66cbf594b5-vxfq4\" (UID: \"a1694e2c-b193-496d-b2df-d4c8857e2cc2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.447519 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.505307 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bdb486cc4-gfrth"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.508878 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.536920 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bdb486cc4-gfrth"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548518 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548570 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-service-ca\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548620 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-oauth-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548660 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-trusted-ca-bundle\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548744 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rll\" (UniqueName: \"kubernetes.io/projected/0019f68b-c93e-4130-89e7-3e2d7a471e56-kube-api-access-p2rll\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.548793 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-oauth-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 
14:37:35.557578 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.561133 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.567468 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.567628 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d2lgm" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.567679 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.569099 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.569223 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.569317 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.572211 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.582873 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.589705 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650555 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rll\" (UniqueName: \"kubernetes.io/projected/0019f68b-c93e-4130-89e7-3e2d7a471e56-kube-api-access-p2rll\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650605 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650635 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650655 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdpb\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650693 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650726 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-oauth-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650808 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-service-ca\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650856 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650896 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650925 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-oauth-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650960 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.650985 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-trusted-ca-bundle\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.651781 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.652208 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-oauth-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.652456 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-service-ca\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.652745 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.652844 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.653290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0019f68b-c93e-4130-89e7-3e2d7a471e56-trusted-ca-bundle\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.667611 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-oauth-config\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.675797 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0019f68b-c93e-4130-89e7-3e2d7a471e56-console-serving-cert\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.675810 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rll\" (UniqueName: \"kubernetes.io/projected/0019f68b-c93e-4130-89e7-3e2d7a471e56-kube-api-access-p2rll\") pod \"console-bdb486cc4-gfrth\" (UID: \"0019f68b-c93e-4130-89e7-3e2d7a471e56\") " pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.754932 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755317 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755453 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755548 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdpb\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755605 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.755937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.756471 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.756547 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.758587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.760187 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: 
I0226 14:37:35.762209 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.762250 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d633318c6bbf353b83d49b28dc3a043863b879cab9b57f6fec512583333cd15/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.762206 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.762464 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.763027 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.772879 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdpb\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.809371 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.827965 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:35 crc kubenswrapper[4809]: I0226 14:37:35.899404 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.145794 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mctbl"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.147731 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.153280 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qv42d" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.153299 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.153695 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.158978 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bld69"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.161788 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186224 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-combined-ca-bundle\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-lib\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqsxk\" (UniqueName: \"kubernetes.io/projected/3807a00a-1120-4344-9a7b-6522b0f3099b-kube-api-access-jqsxk\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-etc-ovs\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186537 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3807a00a-1120-4344-9a7b-6522b0f3099b-scripts\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-log\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86ac277e-d27e-4d56-b145-244a494765fb-scripts\") pod 
\"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186662 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-log-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186688 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-ovn-controller-tls-certs\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186722 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186741 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186775 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kccn\" (UniqueName: \"kubernetes.io/projected/86ac277e-d27e-4d56-b145-244a494765fb-kube-api-access-7kccn\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.186809 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-run\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.190888 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mctbl"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.212574 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bld69"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.288721 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-ovn-controller-tls-certs\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289227 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" 
Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289823 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289866 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kccn\" (UniqueName: \"kubernetes.io/projected/86ac277e-d27e-4d56-b145-244a494765fb-kube-api-access-7kccn\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-run\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-combined-ca-bundle\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-lib\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290035 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqsxk\" (UniqueName: \"kubernetes.io/projected/3807a00a-1120-4344-9a7b-6522b0f3099b-kube-api-access-jqsxk\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290066 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290073 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-etc-ovs\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290180 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-etc-ovs\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290279 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3807a00a-1120-4344-9a7b-6522b0f3099b-scripts\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290312 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-log\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86ac277e-d27e-4d56-b145-244a494765fb-scripts\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-log-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.290706 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-run\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.289759 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-run-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.291220 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-lib\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.291462 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86ac277e-d27e-4d56-b145-244a494765fb-var-log\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.291599 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3807a00a-1120-4344-9a7b-6522b0f3099b-var-log-ovn\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.293218 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86ac277e-d27e-4d56-b145-244a494765fb-scripts\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.293406 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3807a00a-1120-4344-9a7b-6522b0f3099b-scripts\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.294973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-ovn-controller-tls-certs\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.295551 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3807a00a-1120-4344-9a7b-6522b0f3099b-combined-ca-bundle\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.305956 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kccn\" (UniqueName: \"kubernetes.io/projected/86ac277e-d27e-4d56-b145-244a494765fb-kube-api-access-7kccn\") pod \"ovn-controller-ovs-bld69\" (UID: \"86ac277e-d27e-4d56-b145-244a494765fb\") " pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.307381 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqsxk\" (UniqueName: \"kubernetes.io/projected/3807a00a-1120-4344-9a7b-6522b0f3099b-kube-api-access-jqsxk\") pod \"ovn-controller-mctbl\" (UID: \"3807a00a-1120-4344-9a7b-6522b0f3099b\") " pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.345466 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.348278 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.351009 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.351096 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.351242 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.351399 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.351659 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tltr7" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.359434 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.394538 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.394754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.394932 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.395072 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.395169 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.395236 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.395274 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr2k\" (UniqueName: \"kubernetes.io/projected/a357022d-35fa-453a-82df-d4726ce47a6a-kube-api-access-xgr2k\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.395314 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.471211 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mctbl" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.491198 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.499510 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.499576 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.499622 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.500936 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501319 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgr2k\" (UniqueName: \"kubernetes.io/projected/a357022d-35fa-453a-82df-d4726ce47a6a-kube-api-access-xgr2k\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501411 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " 
pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501465 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.501887 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.502427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a357022d-35fa-453a-82df-d4726ce47a6a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.503810 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.504843 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.504875 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c42ca342bde3d959edda2699d8d1b1323ed34400a35536f884159c3eb0a67ba/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.505315 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.508048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a357022d-35fa-453a-82df-d4726ce47a6a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.520109 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgr2k\" (UniqueName: \"kubernetes.io/projected/a357022d-35fa-453a-82df-d4726ce47a6a-kube-api-access-xgr2k\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.548191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2edc39fb-698e-427e-a5cf-ddce77a5f9ad\") pod \"ovsdbserver-sb-0\" (UID: \"a357022d-35fa-453a-82df-d4726ce47a6a\") " pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:37 crc kubenswrapper[4809]: I0226 14:37:37.712266 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 14:37:38 crc kubenswrapper[4809]: I0226 14:37:38.671661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerStarted","Data":"7dc581f248432881e590539ff2e3e243aec323dd954bc34de064ad69cb4016b1"} Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.242768 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.246781 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.251109 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.251293 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mnnhp" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.251403 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.251504 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.262098 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404149 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404540 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404604 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-586375ff-a324-4c8d-b798-dba1bd40173d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-586375ff-a324-4c8d-b798-dba1bd40173d\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404704 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404733 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.404811 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.405109 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.405201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n82f\" (UniqueName: \"kubernetes.io/projected/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-kube-api-access-4n82f\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506746 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506788 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-586375ff-a324-4c8d-b798-dba1bd40173d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-586375ff-a324-4c8d-b798-dba1bd40173d\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506849 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506874 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506899 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506959 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.506996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n82f\" (UniqueName: \"kubernetes.io/projected/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-kube-api-access-4n82f\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.507919 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.509238 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.509743 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.509772 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-586375ff-a324-4c8d-b798-dba1bd40173d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-586375ff-a324-4c8d-b798-dba1bd40173d\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ba5ea770fc1d1e7419c6122d43bfcfacedcc74c97d54ec269007441bf140571/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.510689 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.513109 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.513735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.516125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.527091 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n82f\" (UniqueName: \"kubernetes.io/projected/c3e7cf46-b165-4cb9-9249-286b9ef0a2c4-kube-api-access-4n82f\") pod \"ovsdbserver-nb-0\" (UID: \"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.549075 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-586375ff-a324-4c8d-b798-dba1bd40173d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-586375ff-a324-4c8d-b798-dba1bd40173d\") pod \"ovsdbserver-nb-0\" (UID: 
\"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4\") " pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:41 crc kubenswrapper[4809]: I0226 14:37:41.586629 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.710091 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.710943 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plvfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xsdd2_openstack(f729fd39-30eb-497d-a091-702565fdc270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.712518 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.712551 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" 
podUID="f729fd39-30eb-497d-a091-702565fdc270" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.712671 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw4ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vk76m_openstack(b633e7ac-4c59-4281-a4e7-243da9f909c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.713848 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.722772 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.722936 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxh7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-cxndf_openstack(8f991d5d-8d77-416d-b585-9140c6411a65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.724221 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" podUID="8f991d5d-8d77-416d-b585-9140c6411a65" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.727283 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.727407 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llsj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-t6mlx_openstack(0a0a1623-b934-4e9f-8cb7-0393fa0dcb95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.729763 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.779441 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" Feb 26 14:37:44 crc kubenswrapper[4809]: E0226 14:37:44.779452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.670649 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 14:37:45 crc kubenswrapper[4809]: W0226 14:37:45.691549 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf375c9b0_076d_4c28_adde_74405cf866bc.slice/crio-b47c7c94a46fc59e09925d36dd8dd8002c87e7b8f5bc985b4396fc84a098e9e1 WatchSource:0}: Error finding container 
b47c7c94a46fc59e09925d36dd8dd8002c87e7b8f5bc985b4396fc84a098e9e1: Status 404 returned error can't find the container with id b47c7c94a46fc59e09925d36dd8dd8002c87e7b8f5bc985b4396fc84a098e9e1 Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.757709 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.765788 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" event={"ID":"f729fd39-30eb-497d-a091-702565fdc270","Type":"ContainerDied","Data":"31a5f9beaf345a3d9070f0d5240ab70b8ff24a6b4a4d117098111c95c7b1c676"} Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.765839 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xsdd2" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.766876 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.768341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerStarted","Data":"b47c7c94a46fc59e09925d36dd8dd8002c87e7b8f5bc985b4396fc84a098e9e1"} Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.780278 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" event={"ID":"8f991d5d-8d77-416d-b585-9140c6411a65","Type":"ContainerDied","Data":"a9783aea1ecca90c82298ee90c5f92d41ab6489e3ce220a288e5306a8c6514a3"} Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.780397 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cxndf" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840001 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config\") pod \"8f991d5d-8d77-416d-b585-9140c6411a65\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840246 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config\") pod \"f729fd39-30eb-497d-a091-702565fdc270\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840276 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxh7p\" (UniqueName: \"kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p\") pod \"8f991d5d-8d77-416d-b585-9140c6411a65\" (UID: \"8f991d5d-8d77-416d-b585-9140c6411a65\") " Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840303 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvfp\" (UniqueName: \"kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp\") pod \"f729fd39-30eb-497d-a091-702565fdc270\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840356 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc\") pod \"f729fd39-30eb-497d-a091-702565fdc270\" (UID: \"f729fd39-30eb-497d-a091-702565fdc270\") " Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840838 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f729fd39-30eb-497d-a091-702565fdc270" (UID: "f729fd39-30eb-497d-a091-702565fdc270"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840908 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config" (OuterVolumeSpecName: "config") pod "f729fd39-30eb-497d-a091-702565fdc270" (UID: "f729fd39-30eb-497d-a091-702565fdc270"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.840928 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.841071 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config" (OuterVolumeSpecName: "config") pod "8f991d5d-8d77-416d-b585-9140c6411a65" (UID: "8f991d5d-8d77-416d-b585-9140c6411a65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.853218 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp" (OuterVolumeSpecName: "kube-api-access-plvfp") pod "f729fd39-30eb-497d-a091-702565fdc270" (UID: "f729fd39-30eb-497d-a091-702565fdc270"). InnerVolumeSpecName "kube-api-access-plvfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.865092 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p" (OuterVolumeSpecName: "kube-api-access-rxh7p") pod "8f991d5d-8d77-416d-b585-9140c6411a65" (UID: "8f991d5d-8d77-416d-b585-9140c6411a65"). InnerVolumeSpecName "kube-api-access-rxh7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.943390 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729fd39-30eb-497d-a091-702565fdc270-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.943435 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxh7p\" (UniqueName: \"kubernetes.io/projected/8f991d5d-8d77-416d-b585-9140c6411a65-kube-api-access-rxh7p\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.943449 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvfp\" (UniqueName: \"kubernetes.io/projected/f729fd39-30eb-497d-a091-702565fdc270-kube-api-access-plvfp\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:45 crc kubenswrapper[4809]: I0226 14:37:45.943460 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f991d5d-8d77-416d-b585-9140c6411a65-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.140730 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.152890 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xsdd2"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.191842 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.201093 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cxndf"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.312531 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f991d5d-8d77-416d-b585-9140c6411a65" path="/var/lib/kubelet/pods/8f991d5d-8d77-416d-b585-9140c6411a65/volumes" Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.316253 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f729fd39-30eb-497d-a091-702565fdc270" path="/var/lib/kubelet/pods/f729fd39-30eb-497d-a091-702565fdc270/volumes" Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.821038 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.839893 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bdb486cc4-gfrth"] Feb 26 14:37:46 crc 
kubenswrapper[4809]: I0226 14:37:46.851630 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.918218 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.933072 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.949316 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.957424 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 14:37:46 crc kubenswrapper[4809]: I0226 14:37:46.968584 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 14:37:47 crc kubenswrapper[4809]: I0226 14:37:47.075594 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4"] Feb 26 14:37:47 crc kubenswrapper[4809]: I0226 14:37:47.103195 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mctbl"] Feb 26 14:37:47 crc kubenswrapper[4809]: I0226 14:37:47.255197 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 14:37:47 crc kubenswrapper[4809]: I0226 14:37:47.348214 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bld69"] Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.118188 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 14:37:48 crc kubenswrapper[4809]: W0226 14:37:48.158494 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f21dca_3b2f_4818_8356_1de8cfbbc261.slice/crio-8467464898f7db751d71dc5e3f0f6b255996c60fee77ba207a860155f774f941 WatchSource:0}: Error finding container 8467464898f7db751d71dc5e3f0f6b255996c60fee77ba207a860155f774f941: Status 404 returned error can't find the container with id 8467464898f7db751d71dc5e3f0f6b255996c60fee77ba207a860155f774f941 Feb 26 14:37:48 crc kubenswrapper[4809]: W0226 14:37:48.179181 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0019f68b_c93e_4130_89e7_3e2d7a471e56.slice/crio-941c74e37f9c7da1f47585a71a66993afd493ba618f52db6cb91a8bfc9b2f758 WatchSource:0}: Error finding container 941c74e37f9c7da1f47585a71a66993afd493ba618f52db6cb91a8bfc9b2f758: Status 404 returned error can't find the container with id 941c74e37f9c7da1f47585a71a66993afd493ba618f52db6cb91a8bfc9b2f758 Feb 26 14:37:48 crc kubenswrapper[4809]: W0226 14:37:48.186363 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94b1d0fc_c81e_40db_a043_fd5992788567.slice/crio-ce5f1a6f7a265ff01991096032e94f8d423b028af22308b9e3b452fd3933581b WatchSource:0}: Error finding container ce5f1a6f7a265ff01991096032e94f8d423b028af22308b9e3b452fd3933581b: Status 404 returned error can't find the container with id ce5f1a6f7a265ff01991096032e94f8d423b028af22308b9e3b452fd3933581b Feb 26 14:37:48 crc kubenswrapper[4809]: W0226 14:37:48.194333 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceedead_6111_4ca8_b2ef_c97e503513eb.slice/crio-e9dddda904caa32a1f478d779fee8574c69bbbcb15a0e167e7fabb00964090c3 WatchSource:0}: Error finding container e9dddda904caa32a1f478d779fee8574c69bbbcb15a0e167e7fabb00964090c3: Status 404 returned error can't find the container with id e9dddda904caa32a1f478d779fee8574c69bbbcb15a0e167e7fabb00964090c3 Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.243957 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh67dh57bhcfh596hfbh5bh7ch5f6h59bh664hf9h66dh66bhd7h5c6hch67bh647h76h5d5h9bh67fh67dh54bh5d6h68ch65dh5fh696hdfhf6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqsxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl 
stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-mctbl_openstack(3807a00a-1120-4344-9a7b-6522b0f3099b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.244247 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf6h696h685h647h59bh64ch99hcchd4h646h79h59dh6fh65bhfh8bh594h688h65ch5bch669h55bh5ddhdh78h659hbchc6h658h58dh5f9h9bq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4n82f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(c3e7cf46-b165-4cb9-9249-286b9ef0a2c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.246470 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/ovn-controller-mctbl" podUID="3807a00a-1120-4344-9a7b-6522b0f3099b" Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.264577 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nf6h696h685h647h59bh64ch99hcchd4h646h79h59dh6fh65bhfh8bh594h688h65ch5bch669h55bh5ddhdh78h659hbchc6h658h58dh5f9h9bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4n82f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL]
,},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(c3e7cf46-b165-4cb9-9249-286b9ef0a2c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.265816 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="c3e7cf46-b165-4cb9-9249-286b9ef0a2c4" Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.822744 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerStarted","Data":"a1b62e13af38a597415ae0ab25b6c8b2f8f881fb000bc1deaa3119fbaafa4683"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.823868 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a357022d-35fa-453a-82df-d4726ce47a6a","Type":"ContainerStarted","Data":"fdeff9a965181397ae3ebe67fb2424d52ae269f26be6062d3ca963761c0d1058"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.824831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerStarted","Data":"8ae1144f99427594933fbcfa83c2d412ec9a54f87474121c4cbe14fc526f07ed"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.825681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerStarted","Data":"8467464898f7db751d71dc5e3f0f6b255996c60fee77ba207a860155f774f941"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.826477 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerStarted","Data":"ce5f1a6f7a265ff01991096032e94f8d423b028af22308b9e3b452fd3933581b"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.827374 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mctbl" event={"ID":"3807a00a-1120-4344-9a7b-6522b0f3099b","Type":"ContainerStarted","Data":"d7b2fc7c9d20f464fe4de8e6f3f405a6d06ecf68c19fa03bad3176dd9b403f05"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.828548 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bdb486cc4-gfrth" event={"ID":"0019f68b-c93e-4130-89e7-3e2d7a471e56","Type":"ContainerStarted","Data":"075707a8914df1319173d4ebb639ad7d0a14274cac101a657586987829542954"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.828583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bdb486cc4-gfrth" event={"ID":"0019f68b-c93e-4130-89e7-3e2d7a471e56","Type":"ContainerStarted","Data":"941c74e37f9c7da1f47585a71a66993afd493ba618f52db6cb91a8bfc9b2f758"} Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.830890 4809 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-mctbl" podUID="3807a00a-1120-4344-9a7b-6522b0f3099b" Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.831612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc","Type":"ContainerStarted","Data":"4414b93965dbf4b7141982b5e1856273b67c380be8763e50988b33380d9af11e"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.833519 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerStarted","Data":"4dc6a7e4140a3131748613c63d4741a5dfddd89bfc70cf3e88790e27a17f1c74"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.835790 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4","Type":"ContainerStarted","Data":"118486e7d17dd721d91e1eff2b3e819ee11c78dc9b5436737d14e7d8d834ffbd"} Feb 26 14:37:48 crc kubenswrapper[4809]: E0226 14:37:48.837681 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="c3e7cf46-b165-4cb9-9249-286b9ef0a2c4" Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.838719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bld69" event={"ID":"86ac277e-d27e-4d56-b145-244a494765fb","Type":"ContainerStarted","Data":"6cfee6ae06a852d24e327885ad299afc35ace6af402f8a287cad10613cc09d9f"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.840062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3ceedead-6111-4ca8-b2ef-c97e503513eb","Type":"ContainerStarted","Data":"e9dddda904caa32a1f478d779fee8574c69bbbcb15a0e167e7fabb00964090c3"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.855216 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" event={"ID":"a1694e2c-b193-496d-b2df-d4c8857e2cc2","Type":"ContainerStarted","Data":"04ec92e933d127a51c32810311630c48820baae8d31db57ab78abf3e817d6343"} Feb 26 14:37:48 crc kubenswrapper[4809]: I0226 14:37:48.886081 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bdb486cc4-gfrth" podStartSLOduration=13.886060111 podStartE2EDuration="13.886060111s" podCreationTimestamp="2026-02-26 14:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:37:48.868439651 +0000 UTC m=+1447.341760204" watchObservedRunningTime="2026-02-26 14:37:48.886060111 +0000 UTC m=+1447.359380634" Feb 26 14:37:49 crc kubenswrapper[4809]: E0226 14:37:49.869587 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-mctbl" podUID="3807a00a-1120-4344-9a7b-6522b0f3099b" Feb 26 14:37:49 crc kubenswrapper[4809]: E0226 14:37:49.869998 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="c3e7cf46-b165-4cb9-9249-286b9ef0a2c4" Feb 26 14:37:51 crc kubenswrapper[4809]: I0226 14:37:51.890997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerStarted","Data":"92e81cc7c063f704ca11a1f2d5e5c240fa2b9fed516b8e5beefe4d1a6fee7d42"} Feb 26 14:37:51 crc kubenswrapper[4809]: I0226 14:37:51.892962 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerStarted","Data":"c841726f0c29effa2d9e38f839e1468c20a21a08bf986b3b1775e124fb367a95"} Feb 26 14:37:51 crc kubenswrapper[4809]: I0226 14:37:51.896920 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerStarted","Data":"b5a794cb606575426cb262a59cb8e194a419febe2842acf21e046bbdc5123016"} Feb 26 14:37:51 crc kubenswrapper[4809]: I0226 14:37:51.899503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerStarted","Data":"9d4e9f94eba27283b34ce01c7f379079b4b8e5018367754a17160346d861d189"} Feb 26 14:37:55 crc kubenswrapper[4809]: I0226 14:37:55.829691 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:55 crc kubenswrapper[4809]: I0226 14:37:55.829996 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:55 crc kubenswrapper[4809]: I0226 14:37:55.839165 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:55 crc kubenswrapper[4809]: I0226 14:37:55.943279 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 14:37:56 crc kubenswrapper[4809]: I0226 14:37:56.022475 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.971001 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bld69" event={"ID":"86ac277e-d27e-4d56-b145-244a494765fb","Type":"ContainerStarted","Data":"e55114a6992534fdce1d9ff8a4cde276c3dc54f9aaaee10f4682869b15ebe974"} Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.972742 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3ceedead-6111-4ca8-b2ef-c97e503513eb","Type":"ContainerStarted","Data":"ed08236d108573a87be90de636a3ca0e9957b44dd72f53f753428a78e9d9692c"} Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.974844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc","Type":"ContainerStarted","Data":"6a4c4cf1ed575464012fc20bc2a7cf0933298c8246b9e5e93716963d196cf9d0"}
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.975636 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.977400 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" event={"ID":"a1694e2c-b193-496d-b2df-d4c8857e2cc2","Type":"ContainerStarted","Data":"368a380f844ad616c89722cc9f5b7bed394cccccdd485d6e647fa8baf87fa36a"}
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.978719 4809 generic.go:334] "Generic (PLEG): container finished" podID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerID="cb1d3736bec2cc70d244631669f19dc6d0653e9f9bdb87630d10d5c7a4c5c5f8" exitCode=0
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.978784 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" event={"ID":"b633e7ac-4c59-4281-a4e7-243da9f909c5","Type":"ContainerDied","Data":"cb1d3736bec2cc70d244631669f19dc6d0653e9f9bdb87630d10d5c7a4c5c5f8"}
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.980433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a357022d-35fa-453a-82df-d4726ce47a6a","Type":"ContainerStarted","Data":"e22b66e1c2bb17d28c07d80172ea06fce0a7a02f31fb876c3b37930024f3134c"}
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.982043 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerStarted","Data":"8ed5d36d1f41dbe8e1dcf411a87cdfa674844298f618733b860a727f7f17193c"}
Feb 26 14:37:58 crc kubenswrapper[4809]: I0226 14:37:58.990243 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerStarted","Data":"829c341aa42c1e2068668ac12a19c004ee604f5897609eb8a0edc4c58cb3d0aa"}
Feb 26 14:37:59 crc kubenswrapper[4809]: I0226 14:37:59.026609 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.193924668 podStartE2EDuration="25.026591411s" podCreationTimestamp="2026-02-26 14:37:34 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.187880198 +0000 UTC m=+1446.661200711" lastFinishedPulling="2026-02-26 14:37:58.020546921 +0000 UTC m=+1456.493867454" observedRunningTime="2026-02-26 14:37:59.021877177 +0000 UTC m=+1457.495197700" watchObservedRunningTime="2026-02-26 14:37:59.026591411 +0000 UTC m=+1457.499911934"
Feb 26 14:37:59 crc kubenswrapper[4809]: I0226 14:37:59.120081 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vxfq4" podStartSLOduration=14.468705565 podStartE2EDuration="24.120063033s" podCreationTimestamp="2026-02-26 14:37:35 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.196366789 +0000 UTC m=+1446.669687312" lastFinishedPulling="2026-02-26 14:37:57.847724257 +0000 UTC m=+1456.321044780" observedRunningTime="2026-02-26 14:37:59.112811908 +0000 UTC m=+1457.586132441" watchObservedRunningTime="2026-02-26 14:37:59.120063033 +0000 UTC m=+1457.593383556"
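NOTE: In the pod_startup_latency_tracker.go entries, podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the startup SLO. For kube-state-metrics-0 above: 25.03s end to end minus a 9.83s pull window leaves the logged 15.19s. A sketch of that arithmetic with timestamps copied from the entry; the formula is inferred from the tracker's output, and the kubelet subtracts the monotonic (m=+...) readings, so the wall-clock result below differs from the logged value by a few tens of nanoseconds:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kube-state-metrics-0 entry above; the
        // layout matches how the kubelet prints time.Time values.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-02-26 14:37:34 +0000 UTC")
        firstPull, _ := time.Parse(layout, "2026-02-26 14:37:48.187880198 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2026-02-26 14:37:58.020546921 +0000 UTC")
        running, _ := time.Parse(layout, "2026-02-26 14:37:59.026591411 +0000 UTC")

        e2e := running.Sub(created)          // 25.026591411s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded from the SLO
        fmt.Println(e2e, slo)                // 25.026591411s 15.193924688s (logged: 15.193924668)
    }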
observedRunningTime="2026-02-26 14:37:59.112811908 +0000 UTC m=+1457.586132441" watchObservedRunningTime="2026-02-26 14:37:59.120063033 +0000 UTC m=+1457.593383556" Feb 26 14:37:59 crc kubenswrapper[4809]: I0226 14:37:59.133409 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.480632205 podStartE2EDuration="28.133387912s" podCreationTimestamp="2026-02-26 14:37:31 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.196770451 +0000 UTC m=+1446.670090974" lastFinishedPulling="2026-02-26 14:37:57.849526158 +0000 UTC m=+1456.322846681" observedRunningTime="2026-02-26 14:37:59.129287135 +0000 UTC m=+1457.602607658" watchObservedRunningTime="2026-02-26 14:37:59.133387912 +0000 UTC m=+1457.606708455" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.000835 4809 generic.go:334] "Generic (PLEG): container finished" podID="86ac277e-d27e-4d56-b145-244a494765fb" containerID="e55114a6992534fdce1d9ff8a4cde276c3dc54f9aaaee10f4682869b15ebe974" exitCode=0 Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.000924 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bld69" event={"ID":"86ac277e-d27e-4d56-b145-244a494765fb","Type":"ContainerDied","Data":"e55114a6992534fdce1d9ff8a4cde276c3dc54f9aaaee10f4682869b15ebe974"} Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.004880 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" event={"ID":"b633e7ac-4c59-4281-a4e7-243da9f909c5","Type":"ContainerStarted","Data":"04ca50d87ab24135e0bf7edebeb36514c948cb55bc9b7cebda8e71fbca448ce8"} Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.005202 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.009380 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerID="3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026" exitCode=0 Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.009490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" event={"ID":"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95","Type":"ContainerDied","Data":"3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026"} Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.146384 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" podStartSLOduration=7.303875546 podStartE2EDuration="33.146349868s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:32.191678265 +0000 UTC m=+1430.664998788" lastFinishedPulling="2026-02-26 14:37:58.034152577 +0000 UTC m=+1456.507473110" observedRunningTime="2026-02-26 14:38:00.070400683 +0000 UTC m=+1458.543721436" watchObservedRunningTime="2026-02-26 14:38:00.146349868 +0000 UTC m=+1458.619670391" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.152303 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535278-hr5gw"] Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.154253 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.157365 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.157486 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.157979 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.167761 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-hr5gw"] Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.315132 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxj9\" (UniqueName: \"kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9\") pod \"auto-csr-approver-29535278-hr5gw\" (UID: \"3ec3502a-e50f-4840-a833-9e97d7649127\") " pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.417053 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxj9\" (UniqueName: \"kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9\") pod \"auto-csr-approver-29535278-hr5gw\" (UID: \"3ec3502a-e50f-4840-a833-9e97d7649127\") " pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.446000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxj9\" (UniqueName: \"kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9\") pod \"auto-csr-approver-29535278-hr5gw\" (UID: \"3ec3502a-e50f-4840-a833-9e97d7649127\") " pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:00 crc kubenswrapper[4809]: I0226 14:38:00.494226 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:01 crc kubenswrapper[4809]: I0226 14:38:01.023512 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerStarted","Data":"4083f5c5a82bc4f7ce87948f52bae972188ad54d1b9c8efd42590ccf9611731d"} Feb 26 14:38:02 crc kubenswrapper[4809]: I0226 14:38:02.620629 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-hr5gw"] Feb 26 14:38:02 crc kubenswrapper[4809]: W0226 14:38:02.630480 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ec3502a_e50f_4840_a833_9e97d7649127.slice/crio-695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9 WatchSource:0}: Error finding container 695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9: Status 404 returned error can't find the container with id 695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9 Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.054839 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bld69" event={"ID":"86ac277e-d27e-4d56-b145-244a494765fb","Type":"ContainerStarted","Data":"61c8820d6706097207b4dd22075475f5e7948cdf92891c3821e4885fdf2f0681"} Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.054888 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bld69" event={"ID":"86ac277e-d27e-4d56-b145-244a494765fb","Type":"ContainerStarted","Data":"b23be9dae48d70f553963832a3a8e15d132970cc357c6011869492ddcc63c1c5"} Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.055373 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.055459 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.056857 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" event={"ID":"3ec3502a-e50f-4840-a833-9e97d7649127","Type":"ContainerStarted","Data":"695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9"} Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.059666 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" event={"ID":"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95","Type":"ContainerStarted","Data":"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e"} Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.060692 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.095987 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bld69" podStartSLOduration=16.444315417 podStartE2EDuration="26.095968973s" podCreationTimestamp="2026-02-26 14:37:37 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.196520683 +0000 UTC m=+1446.669841216" lastFinishedPulling="2026-02-26 14:37:57.848174249 +0000 UTC m=+1456.321494772" observedRunningTime="2026-02-26 14:38:03.090095276 +0000 UTC m=+1461.563415819" watchObservedRunningTime="2026-02-26 14:38:03.095968973 +0000 UTC m=+1461.569289496" Feb 26 14:38:03 crc 
kubenswrapper[4809]: I0226 14:38:03.118792 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" podStartSLOduration=-9223372000.73601 podStartE2EDuration="36.1187669s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:32.191637894 +0000 UTC m=+1430.664958417" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:03.111868944 +0000 UTC m=+1461.585189487" watchObservedRunningTime="2026-02-26 14:38:03.1187669 +0000 UTC m=+1461.592087423"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.251891 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-68qnw"]
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.254132 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.265985 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.301089 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-68qnw"]
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.378754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovn-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.379667 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mflh\" (UniqueName: \"kubernetes.io/projected/ce5afc58-7519-4c58-97e2-467468246721-kube-api-access-2mflh\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.379771 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-combined-ca-bundle\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.379834 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce5afc58-7519-4c58-97e2-467468246721-config\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.379942 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.381490 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovs-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw"
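NOTE: The podStartSLOduration=-9223372000.73601 reported for dnsmasq-dns-666b6646f7-t6mlx above is an integer-overflow artifact, not a real duration. This pod's lastFinishedPulling is the zero time (0001-01-01), so lastFinishedPulling minus firstStartedPulling is roughly minus 2025 years; time.Time.Sub saturates that at the minimum time.Duration (math.MinInt64 nanoseconds), and subtracting the saturated value from the 36.1187669s end-to-end duration wraps around int64. A sketch reproducing the logged value under the same (inferred) formula as above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // podStartE2EDuration="36.1187669s" from the entry above.
        e2e := 36118766900 * time.Nanosecond

        // firstStartedPulling is 14:37:32 on 2026-02-26 (sub-second digits are
        // irrelevant once saturation kicks in); lastFinishedPulling stayed at
        // Go's zero time because no pull ever finished for this pod.
        firstPull, _ := time.Parse(time.RFC3339, "2026-02-26T14:37:32Z")
        var lastPull time.Time // 0001-01-01 00:00:00 +0000 UTC

        // Time.Sub saturates results that do not fit in a Duration at
        // math.MinInt64 nanoseconds (about -292 years).
        pull := lastPull.Sub(firstPull)
        fmt.Println(pull) // -2562047h47m16.854775808s

        // Duration is a plain int64, so this subtraction wraps around.
        slo := e2e - pull
        fmt.Println(slo.Seconds()) // -9.223372000736009e+09, the logged -9223372000.73601
    }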
\"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovs-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.450481 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.450771 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="dnsmasq-dns" containerID="cri-o://04ca50d87ab24135e0bf7edebeb36514c948cb55bc9b7cebda8e71fbca448ce8" gracePeriod=10 Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.471240 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.476261 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.479407 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.502696 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovn-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.502862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mflh\" (UniqueName: \"kubernetes.io/projected/ce5afc58-7519-4c58-97e2-467468246721-kube-api-access-2mflh\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.502915 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-combined-ca-bundle\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.502944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce5afc58-7519-4c58-97e2-467468246721-config\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.502991 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.503084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovs-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " 
pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.503479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovs-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.503540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce5afc58-7519-4c58-97e2-467468246721-ovn-rundir\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.516744 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce5afc58-7519-4c58-97e2-467468246721-config\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.526122 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.531228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.546716 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mflh\" (UniqueName: \"kubernetes.io/projected/ce5afc58-7519-4c58-97e2-467468246721-kube-api-access-2mflh\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.616767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.616909 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.616916 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5afc58-7519-4c58-97e2-467468246721-combined-ca-bundle\") pod \"ovn-controller-metrics-68qnw\" (UID: \"ce5afc58-7519-4c58-97e2-467468246721\") " pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.617202 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.617446 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vdm\" (UniqueName: \"kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.692967 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.724388 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.728392 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.728769 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.728878 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.728968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vdm\" (UniqueName: \"kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.729059 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.729858 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.732826 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.733522 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.733771 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.756223 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vdm\" (UniqueName: \"kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm\") pod \"dnsmasq-dns-7fd796d7df-cjv29\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.758170 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.831752 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjphf\" (UniqueName: \"kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.832692 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.832879 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.833139 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.833276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.888525 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-68qnw" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.935460 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.935827 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.936763 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.936886 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjphf\" (UniqueName: \"kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.937206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.936918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.937419 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.938161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.938826 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 
14:38:03.946384 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:03 crc kubenswrapper[4809]: I0226 14:38:03.975386 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjphf\" (UniqueName: \"kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf\") pod \"dnsmasq-dns-86db49b7ff-46pd9\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:04 crc kubenswrapper[4809]: I0226 14:38:04.050338 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:04 crc kubenswrapper[4809]: I0226 14:38:04.682293 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 14:38:04 crc kubenswrapper[4809]: I0226 14:38:04.824089 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:04 crc kubenswrapper[4809]: I0226 14:38:04.854679 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:04 crc kubenswrapper[4809]: I0226 14:38:04.997467 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-68qnw"] Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.080588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" event={"ID":"72eac997-6781-4644-8b2f-0031260b6360","Type":"ContainerStarted","Data":"62605a745ac3fac8a22bdacc47803b305d24063f6ce2180f51509a67e96a30c1"} Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.084881 4809 generic.go:334] "Generic (PLEG): container finished" podID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerID="04ca50d87ab24135e0bf7edebeb36514c948cb55bc9b7cebda8e71fbca448ce8" exitCode=0 Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.084983 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" event={"ID":"b633e7ac-4c59-4281-a4e7-243da9f909c5","Type":"ContainerDied","Data":"04ca50d87ab24135e0bf7edebeb36514c948cb55bc9b7cebda8e71fbca448ce8"} Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.086421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a357022d-35fa-453a-82df-d4726ce47a6a","Type":"ContainerStarted","Data":"146b77dbb12ad83ed754803c9e60c5f45e324d208b7260009c867e40b38de47e"} Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.090090 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-68qnw" event={"ID":"ce5afc58-7519-4c58-97e2-467468246721","Type":"ContainerStarted","Data":"5a62e364c6d04841bb0abf42cab35e294be7cc7a979944982b9548764392e6e9"} Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.091839 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="dnsmasq-dns" containerID="cri-o://18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e" gracePeriod=10 Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.092202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" 
event={"ID":"da116b61-a038-4d71-8e1f-9269df669d13","Type":"ContainerStarted","Data":"20fe6bf252da0ee31f7bb7246940e0c06468cee62057e44e5de4699619647562"} Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.127174 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.145598107 podStartE2EDuration="29.127129293s" podCreationTimestamp="2026-02-26 14:37:36 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.243752244 +0000 UTC m=+1446.717072767" lastFinishedPulling="2026-02-26 14:38:04.22528342 +0000 UTC m=+1462.698603953" observedRunningTime="2026-02-26 14:38:05.114414612 +0000 UTC m=+1463.587735135" watchObservedRunningTime="2026-02-26 14:38:05.127129293 +0000 UTC m=+1463.600449806" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.211988 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.295283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc\") pod \"b633e7ac-4c59-4281-a4e7-243da9f909c5\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.295427 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config\") pod \"b633e7ac-4c59-4281-a4e7-243da9f909c5\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.295618 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4ph\" (UniqueName: \"kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph\") pod \"b633e7ac-4c59-4281-a4e7-243da9f909c5\" (UID: \"b633e7ac-4c59-4281-a4e7-243da9f909c5\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.302145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph" (OuterVolumeSpecName: "kube-api-access-xw4ph") pod "b633e7ac-4c59-4281-a4e7-243da9f909c5" (UID: "b633e7ac-4c59-4281-a4e7-243da9f909c5"). InnerVolumeSpecName "kube-api-access-xw4ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.370110 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config" (OuterVolumeSpecName: "config") pod "b633e7ac-4c59-4281-a4e7-243da9f909c5" (UID: "b633e7ac-4c59-4281-a4e7-243da9f909c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.385424 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b633e7ac-4c59-4281-a4e7-243da9f909c5" (UID: "b633e7ac-4c59-4281-a4e7-243da9f909c5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.398291 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.398324 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4ph\" (UniqueName: \"kubernetes.io/projected/b633e7ac-4c59-4281-a4e7-243da9f909c5-kube-api-access-xw4ph\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.398334 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b633e7ac-4c59-4281-a4e7-243da9f909c5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.833808 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.914735 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config\") pod \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.914867 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llsj4\" (UniqueName: \"kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4\") pod \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.915050 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc\") pod \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\" (UID: \"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95\") " Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.935413 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4" (OuterVolumeSpecName: "kube-api-access-llsj4") pod "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" (UID: "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95"). InnerVolumeSpecName "kube-api-access-llsj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.969912 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config" (OuterVolumeSpecName: "config") pod "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" (UID: "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:05 crc kubenswrapper[4809]: I0226 14:38:05.972059 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" (UID: "0a0a1623-b934-4e9f-8cb7-0393fa0dcb95"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.018179 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.018219 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llsj4\" (UniqueName: \"kubernetes.io/projected/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-kube-api-access-llsj4\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.018231 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.125944 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerID="18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e" exitCode=0 Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.126038 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" event={"ID":"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95","Type":"ContainerDied","Data":"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.126084 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" event={"ID":"0a0a1623-b934-4e9f-8cb7-0393fa0dcb95","Type":"ContainerDied","Data":"97ef69159c640ebe6435b676ab4c8aa6668564ac4cf8c90b7382c311a8c0e079"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.126104 4809 scope.go:117] "RemoveContainer" containerID="18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.126234 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t6mlx" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.133963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-68qnw" event={"ID":"ce5afc58-7519-4c58-97e2-467468246721","Type":"ContainerStarted","Data":"42bcb6acfd0eea1c0f7a367ac575cdcb39732783771b7185e34d92cb375387ec"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.135729 4809 generic.go:334] "Generic (PLEG): container finished" podID="da116b61-a038-4d71-8e1f-9269df669d13" containerID="c85d9846dd5925d54f7dd45bdc9146fd2af6c1ae7de6969831684f0ceb6518f9" exitCode=0 Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.135784 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" event={"ID":"da116b61-a038-4d71-8e1f-9269df669d13","Type":"ContainerDied","Data":"c85d9846dd5925d54f7dd45bdc9146fd2af6c1ae7de6969831684f0ceb6518f9"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.140878 4809 generic.go:334] "Generic (PLEG): container finished" podID="72eac997-6781-4644-8b2f-0031260b6360" containerID="7c44e60fe791ce7c5d468b2825a26cba9fc7fc2e147083a7fbfafd89e9733118" exitCode=0 Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.141502 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" event={"ID":"72eac997-6781-4644-8b2f-0031260b6360","Type":"ContainerDied","Data":"7c44e60fe791ce7c5d468b2825a26cba9fc7fc2e147083a7fbfafd89e9733118"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.145181 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.145478 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vk76m" event={"ID":"b633e7ac-4c59-4281-a4e7-243da9f909c5","Type":"ContainerDied","Data":"bb64dac75976a3bad59204965839e4cb95282748e502b5d8da525dcb242688e9"} Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.206688 4809 scope.go:117] "RemoveContainer" containerID="3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.242749 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.252525 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t6mlx"] Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.254050 4809 scope.go:117] "RemoveContainer" containerID="18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e" Feb 26 14:38:06 crc kubenswrapper[4809]: E0226 14:38:06.258269 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e\": container with ID starting with 18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e not found: ID does not exist" containerID="18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.258331 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e"} err="failed to get container status \"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e\": rpc error: code = NotFound desc = could not 
find container \"18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e\": container with ID starting with 18a8a65afaa720552851f0cc52efd4e7601ed69c218ff5fa1103ddd02ca59a5e not found: ID does not exist" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.258370 4809 scope.go:117] "RemoveContainer" containerID="3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026" Feb 26 14:38:06 crc kubenswrapper[4809]: E0226 14:38:06.259935 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026\": container with ID starting with 3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026 not found: ID does not exist" containerID="3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.259981 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026"} err="failed to get container status \"3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026\": rpc error: code = NotFound desc = could not find container \"3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026\": container with ID starting with 3fcd28fc36ee8808454ff9b9aa0ec9cff69b1f08f0f1598d86300a9f60e27026 not found: ID does not exist" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.260022 4809 scope.go:117] "RemoveContainer" containerID="04ca50d87ab24135e0bf7edebeb36514c948cb55bc9b7cebda8e71fbca448ce8" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.285069 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" path="/var/lib/kubelet/pods/0a0a1623-b934-4e9f-8cb7-0393fa0dcb95/volumes" Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.286030 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.286065 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vk76m"] Feb 26 14:38:06 crc kubenswrapper[4809]: I0226 14:38:06.304294 4809 scope.go:117] "RemoveContainer" containerID="cb1d3736bec2cc70d244631669f19dc6d0653e9f9bdb87630d10d5c7a4c5c5f8" Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.073891 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.155740 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" event={"ID":"3ec3502a-e50f-4840-a833-9e97d7649127","Type":"ContainerStarted","Data":"2ac4aad9a38a72b0914ba782af54c44bf5a3aaab2af74c2d3c2207aad8b147d6"} Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.179483 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" podStartSLOduration=4.487263596 podStartE2EDuration="7.179460925s" podCreationTimestamp="2026-02-26 14:38:00 +0000 UTC" firstStartedPulling="2026-02-26 14:38:02.6324969 +0000 UTC m=+1461.105817423" lastFinishedPulling="2026-02-26 14:38:05.324694229 +0000 UTC m=+1463.798014752" observedRunningTime="2026-02-26 14:38:07.173892017 +0000 UTC m=+1465.647212540" watchObservedRunningTime="2026-02-26 14:38:07.179460925 +0000 UTC m=+1465.652781448" Feb 26 14:38:07 crc kubenswrapper[4809]: 
I0226 14:38:07.206695 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-68qnw" podStartSLOduration=4.206673557 podStartE2EDuration="4.206673557s" podCreationTimestamp="2026-02-26 14:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:07.197109526 +0000 UTC m=+1465.670430049" watchObservedRunningTime="2026-02-26 14:38:07.206673557 +0000 UTC m=+1465.679994080" Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.713032 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.713468 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 14:38:07 crc kubenswrapper[4809]: I0226 14:38:07.756438 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.173243 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa23d41b-7d65-437d-aabf-afec242b5401" containerID="4083f5c5a82bc4f7ce87948f52bae972188ad54d1b9c8efd42590ccf9611731d" exitCode=0 Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.173308 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerDied","Data":"4083f5c5a82bc4f7ce87948f52bae972188ad54d1b9c8efd42590ccf9611731d"} Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.175245 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" event={"ID":"da116b61-a038-4d71-8e1f-9269df669d13","Type":"ContainerStarted","Data":"2a749e2ca5c1cf2e1666b41ecad6398da397263f1972e510d76698b3cdaa7e92"} Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.175691 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.178044 4809 generic.go:334] "Generic (PLEG): container finished" podID="3ec3502a-e50f-4840-a833-9e97d7649127" containerID="2ac4aad9a38a72b0914ba782af54c44bf5a3aaab2af74c2d3c2207aad8b147d6" exitCode=0 Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.178095 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" event={"ID":"3ec3502a-e50f-4840-a833-9e97d7649127","Type":"ContainerDied","Data":"2ac4aad9a38a72b0914ba782af54c44bf5a3aaab2af74c2d3c2207aad8b147d6"} Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.181940 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" event={"ID":"72eac997-6781-4644-8b2f-0031260b6360","Type":"ContainerStarted","Data":"af6fcd1f5c5b33549045a29051fb0049a9624fb694faaa94d265f2d043f9508b"} Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.181986 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.228671 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.240212 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" podStartSLOduration=5.240194656 
podStartE2EDuration="5.240194656s" podCreationTimestamp="2026-02-26 14:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:08.239147496 +0000 UTC m=+1466.712468019" watchObservedRunningTime="2026-02-26 14:38:08.240194656 +0000 UTC m=+1466.713515179" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.261923 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" podStartSLOduration=5.261904692 podStartE2EDuration="5.261904692s" podCreationTimestamp="2026-02-26 14:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:08.256144958 +0000 UTC m=+1466.729465491" watchObservedRunningTime="2026-02-26 14:38:08.261904692 +0000 UTC m=+1466.735225215" Feb 26 14:38:08 crc kubenswrapper[4809]: I0226 14:38:08.275645 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" path="/var/lib/kubelet/pods/b633e7ac-4c59-4281-a4e7-243da9f909c5/volumes" Feb 26 14:38:11 crc kubenswrapper[4809]: I0226 14:38:11.423323 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:11 crc kubenswrapper[4809]: I0226 14:38:11.439445 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxj9\" (UniqueName: \"kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9\") pod \"3ec3502a-e50f-4840-a833-9e97d7649127\" (UID: \"3ec3502a-e50f-4840-a833-9e97d7649127\") " Feb 26 14:38:11 crc kubenswrapper[4809]: I0226 14:38:11.445960 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9" (OuterVolumeSpecName: "kube-api-access-zsxj9") pod "3ec3502a-e50f-4840-a833-9e97d7649127" (UID: "3ec3502a-e50f-4840-a833-9e97d7649127"). InnerVolumeSpecName "kube-api-access-zsxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:11 crc kubenswrapper[4809]: I0226 14:38:11.542002 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxj9\" (UniqueName: \"kubernetes.io/projected/3ec3502a-e50f-4840-a833-9e97d7649127-kube-api-access-zsxj9\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:12 crc kubenswrapper[4809]: I0226 14:38:12.219124 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" event={"ID":"3ec3502a-e50f-4840-a833-9e97d7649127","Type":"ContainerDied","Data":"695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9"} Feb 26 14:38:12 crc kubenswrapper[4809]: I0226 14:38:12.219311 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="695e9ffb30af9a2be08c651808acdf19c667418ad71a837c8aa0384ca35874c9" Feb 26 14:38:12 crc kubenswrapper[4809]: I0226 14:38:12.219182 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535278-hr5gw" Feb 26 14:38:12 crc kubenswrapper[4809]: I0226 14:38:12.508390 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-vwdmk"] Feb 26 14:38:12 crc kubenswrapper[4809]: I0226 14:38:12.520834 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535272-vwdmk"] Feb 26 14:38:13 crc kubenswrapper[4809]: I0226 14:38:13.948664 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.052116 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.109381 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.238992 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="dnsmasq-dns" containerID="cri-o://af6fcd1f5c5b33549045a29051fb0049a9624fb694faaa94d265f2d043f9508b" gracePeriod=10 Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.271903 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5e5bb3-a6a7-4211-bcf4-612414e2f71b" path="/var/lib/kubelet/pods/cd5e5bb3-a6a7-4211-bcf4-612414e2f71b/volumes" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.600544 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:38:14 crc kubenswrapper[4809]: E0226 14:38:14.601119 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec3502a-e50f-4840-a833-9e97d7649127" containerName="oc" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.601137 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec3502a-e50f-4840-a833-9e97d7649127" containerName="oc" Feb 26 14:38:14 crc kubenswrapper[4809]: E0226 14:38:14.601168 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="init" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.601176 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="init" Feb 26 14:38:14 crc kubenswrapper[4809]: E0226 14:38:14.601190 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.601197 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: E0226 14:38:14.601213 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="init" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.601231 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="init" Feb 26 14:38:14 crc kubenswrapper[4809]: E0226 14:38:14.601241 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.601248 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.606360 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b633e7ac-4c59-4281-a4e7-243da9f909c5" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.606410 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec3502a-e50f-4840-a833-9e97d7649127" containerName="oc" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.606425 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0a1623-b934-4e9f-8cb7-0393fa0dcb95" containerName="dnsmasq-dns" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.608039 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.650448 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.708668 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.708746 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.708847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqvsk\" (UniqueName: \"kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.708882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.708906 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.810183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.810277 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.810402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqvsk\" (UniqueName: \"kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.810447 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.810483 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.811503 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.811872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.811947 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.812457 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.831844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqvsk\" (UniqueName: \"kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk\") pod \"dnsmasq-dns-698758b865-gf7ld\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:14 crc kubenswrapper[4809]: I0226 14:38:14.934295 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.252519 4809 generic.go:334] "Generic (PLEG): container finished" podID="72eac997-6781-4644-8b2f-0031260b6360" containerID="af6fcd1f5c5b33549045a29051fb0049a9624fb694faaa94d265f2d043f9508b" exitCode=0 Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.252580 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" event={"ID":"72eac997-6781-4644-8b2f-0031260b6360","Type":"ContainerDied","Data":"af6fcd1f5c5b33549045a29051fb0049a9624fb694faaa94d265f2d043f9508b"} Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.683772 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.692444 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.694626 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.695003 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.700902 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.700910 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sfnmk" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.758891 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.837112 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2dq\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-kube-api-access-hv2dq\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.837573 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-lock\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.837715 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.838048 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-cache\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.838139 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48507eec-5e23-465d-bf31-73a90acd8e73-combined-ca-bundle\") pod 
\"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.838285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-311c6779-2b1b-4fff-9644-ce5c885af398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311c6779-2b1b-4fff-9644-ce5c885af398\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.945698 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-lock\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.946101 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.946181 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-cache\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.946216 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48507eec-5e23-465d-bf31-73a90acd8e73-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.946267 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-311c6779-2b1b-4fff-9644-ce5c885af398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311c6779-2b1b-4fff-9644-ce5c885af398\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.946385 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2dq\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-kube-api-access-hv2dq\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.950217 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-cache\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: E0226 14:38:15.950321 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:15 crc kubenswrapper[4809]: E0226 14:38:15.950342 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:15 crc kubenswrapper[4809]: E0226 14:38:15.950439 4809 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. No retries permitted until 2026-02-26 14:38:16.450422868 +0000 UTC m=+1474.923743391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.957837 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.957890 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-311c6779-2b1b-4fff-9644-ce5c885af398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311c6779-2b1b-4fff-9644-ce5c885af398\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5c072e1a4923e296e2e1c218dc7abf0c562d8d1c1830ce9b76ce3bdd0f59c712/globalmount\"" pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.961501 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/48507eec-5e23-465d-bf31-73a90acd8e73-lock\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.964186 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48507eec-5e23-465d-bf31-73a90acd8e73-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:15 crc kubenswrapper[4809]: I0226 14:38:15.970640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2dq\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-kube-api-access-hv2dq\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.088106 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-f84fv"] Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.091014 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.096044 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.096088 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.096819 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.113533 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-f84fv"] Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.119143 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-311c6779-2b1b-4fff-9644-ce5c885af398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311c6779-2b1b-4fff-9644-ce5c885af398\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.126551 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.261935 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config\") pod \"72eac997-6781-4644-8b2f-0031260b6360\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.262052 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc\") pod \"72eac997-6781-4644-8b2f-0031260b6360\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.262134 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vdm\" (UniqueName: \"kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm\") pod \"72eac997-6781-4644-8b2f-0031260b6360\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.262411 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb\") pod \"72eac997-6781-4644-8b2f-0031260b6360\" (UID: \"72eac997-6781-4644-8b2f-0031260b6360\") " Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.264610 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.264710 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 
14:38:16.264784 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszf5\" (UniqueName: \"kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.264978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.265204 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.265269 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.265296 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.276602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm" (OuterVolumeSpecName: "kube-api-access-75vdm") pod "72eac997-6781-4644-8b2f-0031260b6360" (UID: "72eac997-6781-4644-8b2f-0031260b6360"). InnerVolumeSpecName "kube-api-access-75vdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.279836 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.322780 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cjv29" event={"ID":"72eac997-6781-4644-8b2f-0031260b6360","Type":"ContainerDied","Data":"62605a745ac3fac8a22bdacc47803b305d24063f6ce2180f51509a67e96a30c1"} Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.322840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4","Type":"ContainerStarted","Data":"9f4c20a11670eac738b911b0a335c8a32475617aeb301a3d41b9d27dd5345027"} Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.322865 4809 scope.go:117] "RemoveContainer" containerID="af6fcd1f5c5b33549045a29051fb0049a9624fb694faaa94d265f2d043f9508b" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.331465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72eac997-6781-4644-8b2f-0031260b6360" (UID: "72eac997-6781-4644-8b2f-0031260b6360"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.339746 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72eac997-6781-4644-8b2f-0031260b6360" (UID: "72eac997-6781-4644-8b2f-0031260b6360"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.344465 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.360001 4809 scope.go:117] "RemoveContainer" containerID="7c44e60fe791ce7c5d468b2825a26cba9fc7fc2e147083a7fbfafd89e9733118" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.368200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszf5\" (UniqueName: \"kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.368330 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.368393 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.368431 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle\") pod \"swift-ring-rebalance-f84fv\" (UID: 
\"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.369244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.368453 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.369615 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.369703 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.369799 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.370356 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.370376 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vdm\" (UniqueName: \"kubernetes.io/projected/72eac997-6781-4644-8b2f-0031260b6360-kube-api-access-75vdm\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.370562 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.373620 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.375366 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc 
kubenswrapper[4809]: W0226 14:38:16.377714 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c6c5570_9dfc_4057_bea3_02c1dd09e31f.slice/crio-1f7f20f71d18687ce6f4ceb966547901f04d868837cd7e2d9cb1f40335e9aad5 WatchSource:0}: Error finding container 1f7f20f71d18687ce6f4ceb966547901f04d868837cd7e2d9cb1f40335e9aad5: Status 404 returned error can't find the container with id 1f7f20f71d18687ce6f4ceb966547901f04d868837cd7e2d9cb1f40335e9aad5 Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.402792 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.406596 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.421867 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszf5\" (UniqueName: \"kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5\") pod \"swift-ring-rebalance-f84fv\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.423486 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config" (OuterVolumeSpecName: "config") pod "72eac997-6781-4644-8b2f-0031260b6360" (UID: "72eac997-6781-4644-8b2f-0031260b6360"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.462280 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.472275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.472457 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72eac997-6781-4644-8b2f-0031260b6360-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:16 crc kubenswrapper[4809]: E0226 14:38:16.472543 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:16 crc kubenswrapper[4809]: E0226 14:38:16.472615 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:16 crc kubenswrapper[4809]: E0226 14:38:16.472702 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. 
No retries permitted until 2026-02-26 14:38:17.472681889 +0000 UTC m=+1475.946002442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.704173 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:16 crc kubenswrapper[4809]: I0226 14:38:16.713775 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cjv29"] Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.188086 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-f84fv"] Feb 26 14:38:17 crc kubenswrapper[4809]: W0226 14:38:17.250771 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ec870c5_5d62_422e_bbd4_d130b152e60a.slice/crio-7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788 WatchSource:0}: Error finding container 7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788: Status 404 returned error can't find the container with id 7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788 Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.293521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mctbl" event={"ID":"3807a00a-1120-4344-9a7b-6522b0f3099b","Type":"ContainerStarted","Data":"61570af3980b9deee6234cae0dde9dc7e4ae7070e4afd86183f1da1fc37a0202"} Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.294793 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mctbl" Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.295649 4809 generic.go:334] "Generic (PLEG): container finished" podID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerID="d4c95a4c52b58cc55569078a68c68813c537408cdcf779331fc9cd88e36a5392" exitCode=0 Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.295697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gf7ld" event={"ID":"2c6c5570-9dfc-4057-bea3-02c1dd09e31f","Type":"ContainerDied","Data":"d4c95a4c52b58cc55569078a68c68813c537408cdcf779331fc9cd88e36a5392"} Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.295716 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gf7ld" event={"ID":"2c6c5570-9dfc-4057-bea3-02c1dd09e31f","Type":"ContainerStarted","Data":"1f7f20f71d18687ce6f4ceb966547901f04d868837cd7e2d9cb1f40335e9aad5"} Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.303162 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e7cf46-b165-4cb9-9249-286b9ef0a2c4","Type":"ContainerStarted","Data":"7f6419672797c80539968c3e54dd2b3c52e0dde8d678f2fe8231c97f324fc721"} Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.304061 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f84fv" event={"ID":"9ec870c5-5d62-422e-bbd4-d130b152e60a","Type":"ContainerStarted","Data":"7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788"} Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.344973 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mctbl" 
podStartSLOduration=12.887151591 podStartE2EDuration="40.344949042s" podCreationTimestamp="2026-02-26 14:37:37 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.243787535 +0000 UTC m=+1446.717108058" lastFinishedPulling="2026-02-26 14:38:15.701584986 +0000 UTC m=+1474.174905509" observedRunningTime="2026-02-26 14:38:17.340341082 +0000 UTC m=+1475.813661605" watchObservedRunningTime="2026-02-26 14:38:17.344949042 +0000 UTC m=+1475.818269565" Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.411316 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.955237093000001 podStartE2EDuration="37.411294275s" podCreationTimestamp="2026-02-26 14:37:40 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.244074793 +0000 UTC m=+1446.717395316" lastFinishedPulling="2026-02-26 14:38:15.700131975 +0000 UTC m=+1474.173452498" observedRunningTime="2026-02-26 14:38:17.404790521 +0000 UTC m=+1475.878111054" watchObservedRunningTime="2026-02-26 14:38:17.411294275 +0000 UTC m=+1475.884614798" Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.511735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:17 crc kubenswrapper[4809]: E0226 14:38:17.511998 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:17 crc kubenswrapper[4809]: E0226 14:38:17.512029 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:17 crc kubenswrapper[4809]: E0226 14:38:17.512075 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. No retries permitted until 2026-02-26 14:38:19.512058755 +0000 UTC m=+1477.985379278 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:17 crc kubenswrapper[4809]: I0226 14:38:17.586855 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.274684 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72eac997-6781-4644-8b2f-0031260b6360" path="/var/lib/kubelet/pods/72eac997-6781-4644-8b2f-0031260b6360/volumes" Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.318057 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gf7ld" event={"ID":"2c6c5570-9dfc-4057-bea3-02c1dd09e31f","Type":"ContainerStarted","Data":"88f3188a7dd3f9a35c25a3bd3593654bfe6e6136b61982c2204374c3a0e6def9"} Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.318215 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.319538 4809 generic.go:334] "Generic (PLEG): container finished" podID="b25b5c98-b424-41ce-b099-876b266cf2be" containerID="8ed5d36d1f41dbe8e1dcf411a87cdfa674844298f618733b860a727f7f17193c" exitCode=0 Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.320364 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerDied","Data":"8ed5d36d1f41dbe8e1dcf411a87cdfa674844298f618733b860a727f7f17193c"} Feb 26 14:38:18 crc kubenswrapper[4809]: I0226 14:38:18.358714 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-gf7ld" podStartSLOduration=4.35869159 podStartE2EDuration="4.35869159s" podCreationTimestamp="2026-02-26 14:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:18.353384469 +0000 UTC m=+1476.826704982" watchObservedRunningTime="2026-02-26 14:38:18.35869159 +0000 UTC m=+1476.832012113" Feb 26 14:38:19 crc kubenswrapper[4809]: I0226 14:38:19.329470 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerID="829c341aa42c1e2068668ac12a19c004ee604f5897609eb8a0edc4c58cb3d0aa" exitCode=0 Feb 26 14:38:19 crc kubenswrapper[4809]: I0226 14:38:19.329549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerDied","Data":"829c341aa42c1e2068668ac12a19c004ee604f5897609eb8a0edc4c58cb3d0aa"} Feb 26 14:38:19 crc kubenswrapper[4809]: I0226 14:38:19.556743 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:19 crc kubenswrapper[4809]: E0226 14:38:19.557278 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:19 crc kubenswrapper[4809]: E0226 14:38:19.557292 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:19 crc kubenswrapper[4809]: E0226 14:38:19.557329 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. No retries permitted until 2026-02-26 14:38:23.557317695 +0000 UTC m=+1482.030638218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.390781 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:38:20 crc kubenswrapper[4809]: E0226 14:38:20.392807 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="init" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.392910 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="init" Feb 26 14:38:20 crc kubenswrapper[4809]: E0226 14:38:20.393118 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="dnsmasq-dns" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.393207 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="dnsmasq-dns" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.393524 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="72eac997-6781-4644-8b2f-0031260b6360" containerName="dnsmasq-dns" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.407569 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.434366 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.479297 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.479391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24cp\" (UniqueName: \"kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.479458 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.581073 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.581414 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24cp\" (UniqueName: \"kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.581563 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.581716 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.582161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.613973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c24cp\" (UniqueName: \"kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp\") pod \"redhat-operators-lkxlc\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.637448 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.638262 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 14:38:20 crc kubenswrapper[4809]: I0226 14:38:20.740627 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.080089 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75d699bb66-fpqsn" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerName="console" containerID="cri-o://8b9b8f594287ceec31be0a0d4f5420722377d31d1bdeeb67c217407cb2dd7888" gracePeriod=15 Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.369173 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75d699bb66-fpqsn_3cc121f0-eb4d-4178-bb80-11c1e85e812d/console/0.log" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.369424 4809 generic.go:334] "Generic (PLEG): container finished" podID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerID="8b9b8f594287ceec31be0a0d4f5420722377d31d1bdeeb67c217407cb2dd7888" exitCode=2 Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.369525 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d699bb66-fpqsn" event={"ID":"3cc121f0-eb4d-4178-bb80-11c1e85e812d","Type":"ContainerDied","Data":"8b9b8f594287ceec31be0a0d4f5420722377d31d1bdeeb67c217407cb2dd7888"} Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.423942 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.596148 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.608404 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.615283 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.615563 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-r66x7" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.615703 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.615849 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.627012 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.710695 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-scripts\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711267 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-config\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711334 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711456 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711541 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5hr\" (UniqueName: \"kubernetes.io/projected/e080a660-5ea2-479a-981c-d82d1b547d04-kube-api-access-lz5hr\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.711765 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: 
I0226 14:38:21.813893 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.813974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5hr\" (UniqueName: \"kubernetes.io/projected/e080a660-5ea2-479a-981c-d82d1b547d04-kube-api-access-lz5hr\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814061 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-scripts\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814214 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-config\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.814860 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.815144 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-scripts\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.815210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e080a660-5ea2-479a-981c-d82d1b547d04-config\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.822521 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.832157 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.832455 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e080a660-5ea2-479a-981c-d82d1b547d04-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.838519 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5hr\" (UniqueName: \"kubernetes.io/projected/e080a660-5ea2-479a-981c-d82d1b547d04-kube-api-access-lz5hr\") pod \"ovn-northd-0\" (UID: \"e080a660-5ea2-479a-981c-d82d1b547d04\") " pod="openstack/ovn-northd-0" Feb 26 14:38:21 crc kubenswrapper[4809]: I0226 14:38:21.966122 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.398977 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerID="c841726f0c29effa2d9e38f839e1468c20a21a08bf986b3b1775e124fb367a95" exitCode=0 Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.399040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerDied","Data":"c841726f0c29effa2d9e38f839e1468c20a21a08bf986b3b1775e124fb367a95"} Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.403726 4809 generic.go:334] "Generic (PLEG): container finished" podID="94b1d0fc-c81e-40db-a043-fd5992788567" containerID="b5a794cb606575426cb262a59cb8e194a419febe2842acf21e046bbdc5123016" exitCode=0 Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.403801 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerDied","Data":"b5a794cb606575426cb262a59cb8e194a419febe2842acf21e046bbdc5123016"} Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.407733 4809 generic.go:334] "Generic (PLEG): container finished" podID="32357a81-452d-4c32-8ac2-129d23b8c843" containerID="9d4e9f94eba27283b34ce01c7f379079b4b8e5018367754a17160346d861d189" exitCode=0 Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.407801 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerDied","Data":"9d4e9f94eba27283b34ce01c7f379079b4b8e5018367754a17160346d861d189"} Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.411702 4809 generic.go:334] "Generic (PLEG): container finished" podID="f375c9b0-076d-4c28-adde-74405cf866bc" containerID="92e81cc7c063f704ca11a1f2d5e5c240fa2b9fed516b8e5beefe4d1a6fee7d42" exitCode=0 Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.411783 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerDied","Data":"92e81cc7c063f704ca11a1f2d5e5c240fa2b9fed516b8e5beefe4d1a6fee7d42"} Feb 26 14:38:23 crc kubenswrapper[4809]: I0226 14:38:23.600066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:23 crc kubenswrapper[4809]: E0226 14:38:23.603900 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:23 crc kubenswrapper[4809]: E0226 14:38:23.603946 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:23 crc kubenswrapper[4809]: E0226 14:38:23.603992 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. No retries permitted until 2026-02-26 14:38:31.603970592 +0000 UTC m=+1490.077291115 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:24 crc kubenswrapper[4809]: I0226 14:38:24.935179 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:38:24 crc kubenswrapper[4809]: I0226 14:38:24.990794 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:24 crc kubenswrapper[4809]: I0226 14:38:24.991089 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="dnsmasq-dns" containerID="cri-o://2a749e2ca5c1cf2e1666b41ecad6398da397263f1972e510d76698b3cdaa7e92" gracePeriod=10 Feb 26 14:38:26 crc kubenswrapper[4809]: I0226 14:38:26.454382 4809 generic.go:334] "Generic (PLEG): container finished" podID="da116b61-a038-4d71-8e1f-9269df669d13" containerID="2a749e2ca5c1cf2e1666b41ecad6398da397263f1972e510d76698b3cdaa7e92" exitCode=0 Feb 26 14:38:26 crc kubenswrapper[4809]: I0226 14:38:26.454478 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" event={"ID":"da116b61-a038-4d71-8e1f-9269df669d13","Type":"ContainerDied","Data":"2a749e2ca5c1cf2e1666b41ecad6398da397263f1972e510d76698b3cdaa7e92"} Feb 26 14:38:27 crc kubenswrapper[4809]: I0226 14:38:27.343398 4809 patch_prober.go:28] interesting pod/console-75d699bb66-fpqsn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.93:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 14:38:27 crc kubenswrapper[4809]: I0226 14:38:27.343775 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-75d699bb66-fpqsn" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.93:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.051565 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.702349 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75d699bb66-fpqsn_3cc121f0-eb4d-4178-bb80-11c1e85e812d/console/0.log" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.702773 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.851767 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.851822 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.851852 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4wgd\" (UniqueName: \"kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.851879 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.852048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.852120 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.853029 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.853047 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca" (OuterVolumeSpecName: "service-ca") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.853364 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config\") pod \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\" (UID: \"3cc121f0-eb4d-4178-bb80-11c1e85e812d\") " Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.853890 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.853906 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.854259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config" (OuterVolumeSpecName: "console-config") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.859052 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.859069 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd" (OuterVolumeSpecName: "kube-api-access-b4wgd") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "kube-api-access-b4wgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.859807 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.867711 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3cc121f0-eb4d-4178-bb80-11c1e85e812d" (UID: "3cc121f0-eb4d-4178-bb80-11c1e85e812d"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.956403 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.956951 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.956967 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4wgd\" (UniqueName: \"kubernetes.io/projected/3cc121f0-eb4d-4178-bb80-11c1e85e812d-kube-api-access-b4wgd\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.957001 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3cc121f0-eb4d-4178-bb80-11c1e85e812d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:29 crc kubenswrapper[4809]: I0226 14:38:29.957027 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc121f0-eb4d-4178-bb80-11c1e85e812d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.202698 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.264181 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb\") pod \"da116b61-a038-4d71-8e1f-9269df669d13\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.264304 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc\") pod \"da116b61-a038-4d71-8e1f-9269df669d13\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.264430 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjphf\" (UniqueName: \"kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf\") pod \"da116b61-a038-4d71-8e1f-9269df669d13\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.264530 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config\") pod \"da116b61-a038-4d71-8e1f-9269df669d13\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.264592 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb\") pod \"da116b61-a038-4d71-8e1f-9269df669d13\" (UID: \"da116b61-a038-4d71-8e1f-9269df669d13\") " Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.272325 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf" (OuterVolumeSpecName: "kube-api-access-kjphf") pod "da116b61-a038-4d71-8e1f-9269df669d13" (UID: "da116b61-a038-4d71-8e1f-9269df669d13"). InnerVolumeSpecName "kube-api-access-kjphf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.350411 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da116b61-a038-4d71-8e1f-9269df669d13" (UID: "da116b61-a038-4d71-8e1f-9269df669d13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.352847 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config" (OuterVolumeSpecName: "config") pod "da116b61-a038-4d71-8e1f-9269df669d13" (UID: "da116b61-a038-4d71-8e1f-9269df669d13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.368510 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjphf\" (UniqueName: \"kubernetes.io/projected/da116b61-a038-4d71-8e1f-9269df669d13-kube-api-access-kjphf\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.368551 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.368563 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.370598 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da116b61-a038-4d71-8e1f-9269df669d13" (UID: "da116b61-a038-4d71-8e1f-9269df669d13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.401632 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da116b61-a038-4d71-8e1f-9269df669d13" (UID: "da116b61-a038-4d71-8e1f-9269df669d13"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.470500 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.470535 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da116b61-a038-4d71-8e1f-9269df669d13-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.471650 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.503264 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerStarted","Data":"76d790f1db18c921022cf16255166546815966fac22cec2d56c9b314d229ae2d"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.505678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerStarted","Data":"f2b800ad84380177eaf55be9aca6cedd3dd84caabaf74ce9e66be19860e8706a"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.505857 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.508987 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75d699bb66-fpqsn_3cc121f0-eb4d-4178-bb80-11c1e85e812d/console/0.log" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.509116 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75d699bb66-fpqsn" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.509137 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d699bb66-fpqsn" event={"ID":"3cc121f0-eb4d-4178-bb80-11c1e85e812d","Type":"ContainerDied","Data":"8f1ea5f6fc066079f0a950f0fa748758ee40e72e067f898a4dbc2199d24e5c50"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.509195 4809 scope.go:117] "RemoveContainer" containerID="8b9b8f594287ceec31be0a0d4f5420722377d31d1bdeeb67c217407cb2dd7888" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.521684 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerStarted","Data":"51f722f1046e756005d1581f2bc10ba9953b9f5810eb226613552aaf6604b683"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.522583 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.527436 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e080a660-5ea2-479a-981c-d82d1b547d04","Type":"ContainerStarted","Data":"3cebb7b20a65d06f85f3aa42fd156820503085a685a017bfb101e38cdc0cd17e"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.543037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerStarted","Data":"52aaad8286344eedcba7651c772467a86d9bd7f111e60fee7a9044772efef31a"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.557903 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=50.871562692 podStartE2EDuration="1m0.557871711s" podCreationTimestamp="2026-02-26 14:37:30 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.162609381 +0000 UTC m=+1446.635929904" lastFinishedPulling="2026-02-26 14:37:57.8489184 +0000 UTC m=+1456.322238923" observedRunningTime="2026-02-26 14:38:30.537126722 +0000 UTC m=+1489.010447245" watchObservedRunningTime="2026-02-26 14:38:30.557871711 +0000 UTC m=+1489.031192244" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.558573 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f84fv" event={"ID":"9ec870c5-5d62-422e-bbd4-d130b152e60a","Type":"ContainerStarted","Data":"f3a0e836855271f5d92ccc9360eadf6cf66adc5fce2bd0d79ec774445191e8a7"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.565770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerStarted","Data":"6f833ff09ea76db7e9202047d2b9b7ee2a7139bd4a486382e571e914dc3b411d"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.567558 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.580468 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" event={"ID":"da116b61-a038-4d71-8e1f-9269df669d13","Type":"ContainerDied","Data":"20fe6bf252da0ee31f7bb7246940e0c06468cee62057e44e5de4699619647562"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.580786 4809 scope.go:117] "RemoveContainer" containerID="2a749e2ca5c1cf2e1666b41ecad6398da397263f1972e510d76698b3cdaa7e92" Feb 26 
14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.580885 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-46pd9" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.599511 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=63.198176023 podStartE2EDuration="1m3.599493122s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.187913939 +0000 UTC m=+1446.661234462" lastFinishedPulling="2026-02-26 14:37:48.589231048 +0000 UTC m=+1447.062551561" observedRunningTime="2026-02-26 14:38:30.598674759 +0000 UTC m=+1489.071995282" watchObservedRunningTime="2026-02-26 14:38:30.599493122 +0000 UTC m=+1489.072813645" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.618212 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerStarted","Data":"977762f79837dcfb02c6d8f1c2230e194433f2dfa838e601df552c7e99fb77e3"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.619865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.622548 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerStarted","Data":"e5a2c9e7dddac045c590e1d78e0b0c618b83ede6a8c91719117a33a1c209adbb"} Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.673385 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=63.257346882 podStartE2EDuration="1m3.673336868s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.192395396 +0000 UTC m=+1446.665715919" lastFinishedPulling="2026-02-26 14:37:48.608385382 +0000 UTC m=+1447.081705905" observedRunningTime="2026-02-26 14:38:30.638337334 +0000 UTC m=+1489.111657857" watchObservedRunningTime="2026-02-26 14:38:30.673336868 +0000 UTC m=+1489.146657391" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.673877 4809 scope.go:117] "RemoveContainer" containerID="c85d9846dd5925d54f7dd45bdc9146fd2af6c1ae7de6969831684f0ceb6518f9" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.710664 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.724738 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75d699bb66-fpqsn"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.766081 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.769917 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=61.130502887 podStartE2EDuration="1m3.769899948s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:45.695088688 +0000 UTC m=+1444.168409211" lastFinishedPulling="2026-02-26 14:37:48.334485749 +0000 UTC m=+1446.807806272" observedRunningTime="2026-02-26 14:38:30.701468476 +0000 UTC m=+1489.174788999" watchObservedRunningTime="2026-02-26 14:38:30.769899948 +0000 UTC m=+1489.243220471" Feb 26 14:38:30 crc kubenswrapper[4809]: 
I0226 14:38:30.793086 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-f84fv" podStartSLOduration=2.191946809 podStartE2EDuration="14.793066125s" podCreationTimestamp="2026-02-26 14:38:16 +0000 UTC" firstStartedPulling="2026-02-26 14:38:17.261308579 +0000 UTC m=+1475.734629102" lastFinishedPulling="2026-02-26 14:38:29.862427895 +0000 UTC m=+1488.335748418" observedRunningTime="2026-02-26 14:38:30.73648826 +0000 UTC m=+1489.209808793" watchObservedRunningTime="2026-02-26 14:38:30.793066125 +0000 UTC m=+1489.266386678" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.823361 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.835668 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-46pd9"] Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.849226 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=53.394568655 podStartE2EDuration="1m3.849206638s" podCreationTimestamp="2026-02-26 14:37:27 +0000 UTC" firstStartedPulling="2026-02-26 14:37:37.829122836 +0000 UTC m=+1436.302443359" lastFinishedPulling="2026-02-26 14:37:48.283760829 +0000 UTC m=+1446.757081342" observedRunningTime="2026-02-26 14:38:30.837840846 +0000 UTC m=+1489.311161379" watchObservedRunningTime="2026-02-26 14:38:30.849206638 +0000 UTC m=+1489.322527161" Feb 26 14:38:30 crc kubenswrapper[4809]: I0226 14:38:30.876513 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=52.210050817 podStartE2EDuration="1m1.876490523s" podCreationTimestamp="2026-02-26 14:37:29 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.187854437 +0000 UTC m=+1446.661174960" lastFinishedPulling="2026-02-26 14:37:57.854294133 +0000 UTC m=+1456.327614666" observedRunningTime="2026-02-26 14:38:30.86441131 +0000 UTC m=+1489.337731833" watchObservedRunningTime="2026-02-26 14:38:30.876490523 +0000 UTC m=+1489.349811046" Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.605200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:31 crc kubenswrapper[4809]: E0226 14:38:31.605413 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 14:38:31 crc kubenswrapper[4809]: E0226 14:38:31.605446 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 14:38:31 crc kubenswrapper[4809]: E0226 14:38:31.605501 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift podName:48507eec-5e23-465d-bf31-73a90acd8e73 nodeName:}" failed. No retries permitted until 2026-02-26 14:38:47.60548304 +0000 UTC m=+1506.078803563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift") pod "swift-storage-0" (UID: "48507eec-5e23-465d-bf31-73a90acd8e73") : configmap "swift-ring-files" not found Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.640095 4809 generic.go:334] "Generic (PLEG): container finished" podID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerID="55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d" exitCode=0 Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.640157 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d"} Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.640180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerStarted","Data":"0eab694ce976e2bd91f427054791d25a3e8c6fb5df0ca617a24c1e6a21b21c3c"} Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.760568 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 14:38:31 crc kubenswrapper[4809]: I0226 14:38:31.760958 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 14:38:32 crc kubenswrapper[4809]: I0226 14:38:32.268867 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" path="/var/lib/kubelet/pods/3cc121f0-eb4d-4178-bb80-11c1e85e812d/volumes" Feb 26 14:38:32 crc kubenswrapper[4809]: I0226 14:38:32.269614 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da116b61-a038-4d71-8e1f-9269df669d13" path="/var/lib/kubelet/pods/da116b61-a038-4d71-8e1f-9269df669d13/volumes" Feb 26 14:38:32 crc kubenswrapper[4809]: I0226 14:38:32.782762 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:38:32 crc kubenswrapper[4809]: I0226 14:38:32.786585 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bld69" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.055296 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mctbl-config-w2g2n"] Feb 26 14:38:33 crc kubenswrapper[4809]: E0226 14:38:33.055965 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerName="console" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.056053 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerName="console" Feb 26 14:38:33 crc kubenswrapper[4809]: E0226 14:38:33.056127 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="dnsmasq-dns" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.056181 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="dnsmasq-dns" Feb 26 14:38:33 crc kubenswrapper[4809]: E0226 14:38:33.056252 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="init" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.056308 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="init" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.056604 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="da116b61-a038-4d71-8e1f-9269df669d13" containerName="dnsmasq-dns" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.056671 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc121f0-eb4d-4178-bb80-11c1e85e812d" containerName="console" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.057396 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.063026 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.069662 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hg92\" (UniqueName: \"kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.069810 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.069839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.069916 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.069945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.070024 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.079555 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mctbl-config-w2g2n"] Feb 26 14:38:33 crc 
kubenswrapper[4809]: I0226 14:38:33.171325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.171366 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.171406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.171509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hg92\" (UniqueName: \"kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.171579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.171601 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.172268 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.172586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.172589 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc 
kubenswrapper[4809]: I0226 14:38:33.172623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.177581 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.243766 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hg92\" (UniqueName: \"kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92\") pod \"ovn-controller-mctbl-config-w2g2n\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.388786 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.671573 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e080a660-5ea2-479a-981c-d82d1b547d04","Type":"ContainerStarted","Data":"c8702586a477b94bc426b12f53d736714625c111bd7f4c072d9a7fc75a10b36e"} Feb 26 14:38:33 crc kubenswrapper[4809]: I0226 14:38:33.692189 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerStarted","Data":"89433ebdd7f32fe2659be9b50926034615257f31cb3a9a790ae9d57900669bc5"} Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.026632 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mctbl-config-w2g2n"] Feb 26 14:38:34 crc kubenswrapper[4809]: W0226 14:38:34.030698 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ee5398_5e14_43cd_8ac7_cfc8696ff9f4.slice/crio-5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f WatchSource:0}: Error finding container 5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f: Status 404 returned error can't find the container with id 5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.699769 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e080a660-5ea2-479a-981c-d82d1b547d04","Type":"ContainerStarted","Data":"51eaafae062d4958a53ad4ab1db96745f4be1eed10a01f9ddffd021ed9cc0f15"} Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.700104 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.700592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mctbl-config-w2g2n" event={"ID":"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4","Type":"ContainerStarted","Data":"5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f"} Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.702161 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerStarted","Data":"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59"} Feb 26 14:38:34 crc kubenswrapper[4809]: I0226 14:38:34.735042 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=11.041332289 podStartE2EDuration="13.735024641s" podCreationTimestamp="2026-02-26 14:38:21 +0000 UTC" firstStartedPulling="2026-02-26 14:38:30.483330465 +0000 UTC m=+1488.956650988" lastFinishedPulling="2026-02-26 14:38:33.177022817 +0000 UTC m=+1491.650343340" observedRunningTime="2026-02-26 14:38:34.730809331 +0000 UTC m=+1493.204129854" watchObservedRunningTime="2026-02-26 14:38:34.735024641 +0000 UTC m=+1493.208345164" Feb 26 14:38:37 crc kubenswrapper[4809]: I0226 14:38:37.731930 4809 generic.go:334] "Generic (PLEG): container finished" podID="41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" containerID="57d74010c098aeee06822bac5ed3fd7d4b634fd26a6ae2d59a1cdecac1ffa85c" exitCode=0 Feb 26 14:38:37 crc kubenswrapper[4809]: I0226 14:38:37.731997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mctbl-config-w2g2n" event={"ID":"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4","Type":"ContainerDied","Data":"57d74010c098aeee06822bac5ed3fd7d4b634fd26a6ae2d59a1cdecac1ffa85c"} Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.284811 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.324977 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.325058 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.325112 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hg92\" (UniqueName: \"kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.325137 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.325152 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.325240 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run\") pod \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\" (UID: \"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4\") " Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.327945 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.327982 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.329057 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts" (OuterVolumeSpecName: "scripts") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.329088 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.329110 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run" (OuterVolumeSpecName: "var-run") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.336713 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92" (OuterVolumeSpecName: "kube-api-access-9hg92") pod "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" (UID: "41ee5398-5e14-43cd-8ac7-cfc8696ff9f4"). InnerVolumeSpecName "kube-api-access-9hg92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426848 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hg92\" (UniqueName: \"kubernetes.io/projected/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-kube-api-access-9hg92\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426889 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426898 4809 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426908 4809 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426916 4809 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.426923 4809 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.750454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mctbl-config-w2g2n" event={"ID":"41ee5398-5e14-43cd-8ac7-cfc8696ff9f4","Type":"ContainerDied","Data":"5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f"} Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.750737 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fff6f381b78f6307ee3d14d86845c0e3b1028bb1b33b5f9f28ede284212d56f" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.750479 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mctbl-config-w2g2n" Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.760783 4809 generic.go:334] "Generic (PLEG): container finished" podID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerID="42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59" exitCode=0 Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.760971 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59"} Feb 26 14:38:39 crc kubenswrapper[4809]: I0226 14:38:39.929821 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 14:38:40 crc kubenswrapper[4809]: I0226 14:38:40.043540 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 14:38:40 crc kubenswrapper[4809]: I0226 14:38:40.389904 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 14:38:40 crc kubenswrapper[4809]: I0226 14:38:40.389952 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 14:38:40 crc kubenswrapper[4809]: I0226 14:38:40.416614 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mctbl-config-w2g2n"] Feb 26 14:38:40 crc kubenswrapper[4809]: I0226 14:38:40.424366 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mctbl-config-w2g2n"] Feb 26 14:38:41 crc kubenswrapper[4809]: I0226 14:38:41.794193 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:38:41 crc kubenswrapper[4809]: I0226 14:38:41.794545 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:38:42 crc kubenswrapper[4809]: I0226 14:38:42.276039 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" path="/var/lib/kubelet/pods/41ee5398-5e14-43cd-8ac7-cfc8696ff9f4/volumes" Feb 26 14:38:43 crc kubenswrapper[4809]: I0226 14:38:43.338618 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:38:43 crc kubenswrapper[4809]: I0226 14:38:43.703543 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 26 14:38:43 crc kubenswrapper[4809]: I0226 14:38:43.828988 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output=< Feb 26 14:38:43 crc kubenswrapper[4809]: wsrep_local_state_comment (Joined) differs from Synced Feb 26 14:38:43 crc kubenswrapper[4809]: > Feb 26 14:38:44 crc kubenswrapper[4809]: I0226 14:38:44.811913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerStarted","Data":"5bcf07ec7aaf57f9f2f82b8666e67920ad9fa957763f47ec51439c024304b6f2"} Feb 26 14:38:44 crc kubenswrapper[4809]: I0226 14:38:44.814452 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerStarted","Data":"cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d"} Feb 26 14:38:44 crc kubenswrapper[4809]: I0226 14:38:44.841915 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.324631647 podStartE2EDuration="1m10.841894645s" podCreationTimestamp="2026-02-26 14:37:34 +0000 UTC" firstStartedPulling="2026-02-26 14:37:48.16926185 +0000 UTC m=+1446.642582373" lastFinishedPulling="2026-02-26 14:38:43.686524848 +0000 UTC m=+1502.159845371" observedRunningTime="2026-02-26 14:38:44.838370655 +0000 UTC m=+1503.311691178" watchObservedRunningTime="2026-02-26 14:38:44.841894645 +0000 UTC m=+1503.315215168" Feb 26 14:38:45 crc kubenswrapper[4809]: I0226 14:38:45.824300 4809 generic.go:334] "Generic (PLEG): container finished" podID="9ec870c5-5d62-422e-bbd4-d130b152e60a" containerID="f3a0e836855271f5d92ccc9360eadf6cf66adc5fce2bd0d79ec774445191e8a7" exitCode=0 Feb 26 14:38:45 crc kubenswrapper[4809]: I0226 14:38:45.824405 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f84fv" event={"ID":"9ec870c5-5d62-422e-bbd4-d130b152e60a","Type":"ContainerDied","Data":"f3a0e836855271f5d92ccc9360eadf6cf66adc5fce2bd0d79ec774445191e8a7"} Feb 26 14:38:45 crc kubenswrapper[4809]: I0226 14:38:45.849492 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkxlc" podStartSLOduration=13.86526468 podStartE2EDuration="25.849472309s" podCreationTimestamp="2026-02-26 14:38:20 +0000 UTC" firstStartedPulling="2026-02-26 14:38:31.943967426 +0000 UTC m=+1490.417287949" lastFinishedPulling="2026-02-26 14:38:43.928175055 +0000 UTC m=+1502.401495578" observedRunningTime="2026-02-26 14:38:44.870394444 +0000 UTC m=+1503.343714967" watchObservedRunningTime="2026-02-26 14:38:45.849472309 +0000 UTC m=+1504.322792832" Feb 26 14:38:45 crc kubenswrapper[4809]: I0226 14:38:45.899974 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.260184 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.405991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406087 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406191 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406328 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406365 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406417 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rszf5\" (UniqueName: \"kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.406465 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf\") pod \"9ec870c5-5d62-422e-bbd4-d130b152e60a\" (UID: \"9ec870c5-5d62-422e-bbd4-d130b152e60a\") " Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.410095 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.410449 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.414201 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5" (OuterVolumeSpecName: "kube-api-access-rszf5") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "kube-api-access-rszf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.418823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.433929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.436645 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts" (OuterVolumeSpecName: "scripts") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.445291 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9ec870c5-5d62-422e-bbd4-d130b152e60a" (UID: "9ec870c5-5d62-422e-bbd4-d130b152e60a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514514 4809 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514566 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514581 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rszf5\" (UniqueName: \"kubernetes.io/projected/9ec870c5-5d62-422e-bbd4-d130b152e60a-kube-api-access-rszf5\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514596 4809 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514612 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ec870c5-5d62-422e-bbd4-d130b152e60a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514623 4809 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9ec870c5-5d62-422e-bbd4-d130b152e60a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.514637 4809 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9ec870c5-5d62-422e-bbd4-d130b152e60a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.518570 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mctbl" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.615928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.622738 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/48507eec-5e23-465d-bf31-73a90acd8e73-etc-swift\") pod \"swift-storage-0\" (UID: \"48507eec-5e23-465d-bf31-73a90acd8e73\") " pod="openstack/swift-storage-0" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.668199 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.859922 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f84fv" event={"ID":"9ec870c5-5d62-422e-bbd4-d130b152e60a","Type":"ContainerDied","Data":"7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788"} Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.860215 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7152d62527756519358cc6879993591a9233bfd03a2a187c9023d863a181e788" Feb 26 14:38:47 crc kubenswrapper[4809]: I0226 14:38:47.860142 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-f84fv" Feb 26 14:38:48 crc kubenswrapper[4809]: I0226 14:38:48.314388 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 14:38:48 crc kubenswrapper[4809]: W0226 14:38:48.316660 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48507eec_5e23_465d_bf31_73a90acd8e73.slice/crio-4839e235accfa7a73b0b9e8f35c6f9d1cb9638d4d6e3a32aecb1e2b8eeb90ef0 WatchSource:0}: Error finding container 4839e235accfa7a73b0b9e8f35c6f9d1cb9638d4d6e3a32aecb1e2b8eeb90ef0: Status 404 returned error can't find the container with id 4839e235accfa7a73b0b9e8f35c6f9d1cb9638d4d6e3a32aecb1e2b8eeb90ef0 Feb 26 14:38:48 crc kubenswrapper[4809]: I0226 14:38:48.883520 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"4839e235accfa7a73b0b9e8f35c6f9d1cb9638d4d6e3a32aecb1e2b8eeb90ef0"} Feb 26 14:38:48 crc kubenswrapper[4809]: I0226 14:38:48.989289 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 26 14:38:49 crc kubenswrapper[4809]: I0226 14:38:49.195562 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 14:38:49 crc kubenswrapper[4809]: I0226 14:38:49.238206 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 26 14:38:49 crc kubenswrapper[4809]: I0226 14:38:49.298180 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.507992 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-txbc2"] Feb 26 14:38:50 crc kubenswrapper[4809]: E0226 14:38:50.508932 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" containerName="ovn-config" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.508945 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" containerName="ovn-config" Feb 26 14:38:50 crc kubenswrapper[4809]: E0226 14:38:50.508958 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec870c5-5d62-422e-bbd4-d130b152e60a" containerName="swift-ring-rebalance" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.508966 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec870c5-5d62-422e-bbd4-d130b152e60a" containerName="swift-ring-rebalance" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.509187 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9ec870c5-5d62-422e-bbd4-d130b152e60a" containerName="swift-ring-rebalance" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.509209 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ee5398-5e14-43cd-8ac7-cfc8696ff9f4" containerName="ovn-config" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.510034 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.513846 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.520736 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-txbc2"] Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.548802 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.626631 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lx9\" (UniqueName: \"kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.627235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.728739 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lx9\" (UniqueName: \"kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.729096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.730736 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.741309 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.741526 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.750234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-45lx9\" (UniqueName: \"kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9\") pod \"root-account-create-update-txbc2\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.882457 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.899585 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.904784 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.905619 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"fc695a865069628c9aced5e3bbc9f89d726827290afc27a91b8f2862f80e0544"} Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.905654 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"522630667c97905a4f999fea55d7bb56746bd731ccc3ecdf719843d74149b83c"} Feb 26 14:38:50 crc kubenswrapper[4809]: I0226 14:38:50.905672 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"4496d16a2b3670100ff09fd289995c3e62b07dcb0d23089e294c4b31b18ea5a0"} Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.323274 4809 scope.go:117] "RemoveContainer" containerID="6740deae034c8610fe54d751f7fe65ec4314d3121a3f1779f50ad3043b18e020" Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.456660 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-txbc2"] Feb 26 14:38:51 crc kubenswrapper[4809]: W0226 14:38:51.478810 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67dba26b_656d_4a47_b407_bbaf243903a5.slice/crio-fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25 WatchSource:0}: Error finding container fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25: Status 404 returned error can't find the container with id fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25 Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.813321 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:38:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:38:51 crc kubenswrapper[4809]: > Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.916186 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"59f3f784a2b7c66681f71c8d722ec26a75e2783aeb98defe960631a8054ee0b8"} Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.919003 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-txbc2" 
event={"ID":"67dba26b-656d-4a47-b407-bbaf243903a5","Type":"ContainerStarted","Data":"004eb8f9928a5d772284cd399142c60f678e7b1c2a32077f4b1e07bafc1d1330"} Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.919052 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-txbc2" event={"ID":"67dba26b-656d-4a47-b407-bbaf243903a5","Type":"ContainerStarted","Data":"fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25"} Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.920082 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:51 crc kubenswrapper[4809]: I0226 14:38:51.941908 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-txbc2" podStartSLOduration=1.9418851 podStartE2EDuration="1.9418851s" podCreationTimestamp="2026-02-26 14:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:51.932167504 +0000 UTC m=+1510.405488037" watchObservedRunningTime="2026-02-26 14:38:51.9418851 +0000 UTC m=+1510.415205623" Feb 26 14:38:52 crc kubenswrapper[4809]: I0226 14:38:52.153962 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 14:38:52 crc kubenswrapper[4809]: I0226 14:38:52.927090 4809 generic.go:334] "Generic (PLEG): container finished" podID="67dba26b-656d-4a47-b407-bbaf243903a5" containerID="004eb8f9928a5d772284cd399142c60f678e7b1c2a32077f4b1e07bafc1d1330" exitCode=0 Feb 26 14:38:52 crc kubenswrapper[4809]: I0226 14:38:52.929360 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-txbc2" event={"ID":"67dba26b-656d-4a47-b407-bbaf243903a5","Type":"ContainerDied","Data":"004eb8f9928a5d772284cd399142c60f678e7b1c2a32077f4b1e07bafc1d1330"} Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.146258 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-b5tkr"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.150364 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.185074 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b5tkr"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.241208 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b727-account-create-update-m7lbg"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.242575 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.261374 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b727-account-create-update-m7lbg"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.274519 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.298669 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts\") pod \"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.298772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrz86\" (UniqueName: \"kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86\") pod \"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.401235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.401338 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmp4\" (UniqueName: \"kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.401368 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts\") pod \"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.401458 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrz86\" (UniqueName: \"kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86\") pod \"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.403343 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts\") pod \"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.429474 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrz86\" (UniqueName: \"kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86\") pod 
\"keystone-db-create-b5tkr\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.443940 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cjlv9"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.445673 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.486078 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjlv9"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.496196 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a996-account-create-update-hlkf2"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.502868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.502975 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldmp4\" (UniqueName: \"kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.504209 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.508859 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.509181 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a996-account-create-update-hlkf2"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.517457 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.538597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldmp4\" (UniqueName: \"kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4\") pod \"keystone-b727-account-create-update-m7lbg\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.543328 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.596158 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.607980 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8w6\" (UniqueName: \"kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.608203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fklgq\" (UniqueName: \"kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.608268 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.608294 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.710122 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fklgq\" (UniqueName: \"kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.710188 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.710229 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.710330 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8w6\" (UniqueName: \"kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.711113 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.711295 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.722111 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-7zml8"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.723753 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7zml8" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.733921 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-87d4-account-create-update-zw2lg"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.735799 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.736289 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8w6\" (UniqueName: \"kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6\") pod \"placement-a996-account-create-update-hlkf2\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.742405 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.742706 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fklgq\" (UniqueName: \"kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq\") pod \"placement-db-create-cjlv9\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.744786 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7zml8"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.755960 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-87d4-account-create-update-zw2lg"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.795597 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.822647 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8tcd\" (UniqueName: \"kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.822728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbm9b\" (UniqueName: \"kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.822799 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.822834 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.831232 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wxkll"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.834075 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.842805 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.873636 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-88e3-account-create-update-r9tck"] Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.875624 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.886287 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927059 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8tcd\" (UniqueName: \"kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbm9b\" (UniqueName: \"kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927253 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvvw\" (UniqueName: \"kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927562 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.927611 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.985967 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbm9b\" (UniqueName: \"kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:53 crc kubenswrapper[4809]: I0226 14:38:53.998397 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8tcd\" (UniqueName: \"kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.028424 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts\") pod \"cinder-87d4-account-create-update-zw2lg\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.029044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts\") pod \"heat-db-create-7zml8\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " pod="openstack/heat-db-create-7zml8" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.049396 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7zml8" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.078414 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.127408 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-88e3-account-create-update-r9tck"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.129678 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.129715 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvvw\" (UniqueName: \"kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.129761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4mv\" (UniqueName: \"kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.129883 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.130530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.157973 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"fd55d55a463b4d6d8d42ba81851c5d74ced6c97fdf9a563954ac593758b773f4"} Feb 26 
14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.158037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"c17a3de961c4a39b7156afc90e9b697cfd5fa7a622a3267e008c998865d9f667"} Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.176165 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wxkll"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.223373 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvvw\" (UniqueName: \"kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw\") pod \"cinder-db-create-wxkll\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.233377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.233536 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4mv\" (UniqueName: \"kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.284177 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.321360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4mv\" (UniqueName: \"kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv\") pod \"heat-88e3-account-create-update-r9tck\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.332673 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.332717 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8s7lr"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.333614 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="prometheus" containerID="cri-o://e5a2c9e7dddac045c590e1d78e0b0c618b83ede6a8c91719117a33a1c209adbb" gracePeriod=600 Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.333896 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.334160 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="thanos-sidecar" containerID="cri-o://5bcf07ec7aaf57f9f2f82b8666e67920ad9fa957763f47ec51439c024304b6f2" gracePeriod=600 Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.334251 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="config-reloader" containerID="cri-o://89433ebdd7f32fe2659be9b50926034615257f31cb3a9a790ae9d57900669bc5" gracePeriod=600 Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.336291 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.364438 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ae2c-account-create-update-rj9zh"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.365835 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.382497 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.387192 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8s7lr"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.400990 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ae2c-account-create-update-rj9zh"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.425104 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-q57fj"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.427316 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.437614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznwx\" (UniqueName: \"kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.437735 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.469393 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-93c6-account-create-update-mkb9z"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.471803 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.485965 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.520970 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wxkll" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541171 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541330 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cblw5\" (UniqueName: \"kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541351 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541415 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznwx\" (UniqueName: \"kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.541449 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb28s\" (UniqueName: \"kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.554817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.555520 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q57fj"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.618967 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zznwx\" (UniqueName: \"kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx\") pod \"barbican-db-create-8s7lr\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.642861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.642954 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.643952 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.643030 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cblw5\" (UniqueName: \"kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.644004 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.644092 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnm7w\" (UniqueName: \"kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.644127 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb28s\" (UniqueName: \"kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.644892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " 
pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.655102 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-93c6-account-create-update-mkb9z"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.695003 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cblw5\" (UniqueName: \"kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5\") pod \"barbican-ae2c-account-create-update-rj9zh\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.703583 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb28s\" (UniqueName: \"kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s\") pod \"neutron-db-create-q57fj\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.747587 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.748041 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnm7w\" (UniqueName: \"kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.749922 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.814939 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnm7w\" (UniqueName: \"kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w\") pod \"neutron-93c6-account-create-update-mkb9z\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.815696 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vsnnq"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.817196 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.874588 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vsnnq"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.898632 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-fc96-account-create-update-fp688"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.913353 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.920041 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.933990 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fc96-account-create-update-fp688"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.955920 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqzj\" (UniqueName: \"kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.956007 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.961855 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b727-account-create-update-m7lbg"] Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.966367 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.986772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8s7lr" Feb 26 14:38:54 crc kubenswrapper[4809]: I0226 14:38:54.996631 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q57fj" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.015347 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.073200 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpq4\" (UniqueName: \"kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.073296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqzj\" (UniqueName: \"kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.073347 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.073402 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.075980 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.118750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqzj\" (UniqueName: \"kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj\") pod \"mysqld-exporter-openstack-db-create-vsnnq\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.175364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.176295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpq4\" (UniqueName: \"kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" 
Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.177370 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.197299 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpq4\" (UniqueName: \"kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4\") pod \"mysqld-exporter-fc96-account-create-update-fp688\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207111 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa23d41b-7d65-437d-aabf-afec242b5401" containerID="5bcf07ec7aaf57f9f2f82b8666e67920ad9fa957763f47ec51439c024304b6f2" exitCode=0 Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207148 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa23d41b-7d65-437d-aabf-afec242b5401" containerID="89433ebdd7f32fe2659be9b50926034615257f31cb3a9a790ae9d57900669bc5" exitCode=0 Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207158 4809 generic.go:334] "Generic (PLEG): container finished" podID="fa23d41b-7d65-437d-aabf-afec242b5401" containerID="e5a2c9e7dddac045c590e1d78e0b0c618b83ede6a8c91719117a33a1c209adbb" exitCode=0 Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerDied","Data":"5bcf07ec7aaf57f9f2f82b8666e67920ad9fa957763f47ec51439c024304b6f2"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207269 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerDied","Data":"89433ebdd7f32fe2659be9b50926034615257f31cb3a9a790ae9d57900669bc5"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.207284 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerDied","Data":"e5a2c9e7dddac045c590e1d78e0b0c618b83ede6a8c91719117a33a1c209adbb"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.210873 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"d5cb62ae9fff6dfce3015179f99c56fdd3463f8b0b1c22fe1cc1ecfe76bc36fd"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.210914 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"c67ca618127e915d335d11c39c7a9d0bbead0981a7126d198cfcead03aae7a3a"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.212143 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b727-account-create-update-m7lbg" event={"ID":"8408ab37-9e60-4307-bd8d-1b1d9db3f539","Type":"ContainerStarted","Data":"a75557214264117d7a82cc0f275fa5a9bb7204150616de3737916cdf7dfcadb7"} Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 
14:38:55.216110 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b5tkr"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.327559 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.359766 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.529523 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.529683 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.656238 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjlv9"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.700792 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a996-account-create-update-hlkf2"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.702550 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.702766 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.702878 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.702942 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703022 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdpb\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703135 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: 
I0226 14:38:55.703175 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45lx9\" (UniqueName: \"kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9\") pod \"67dba26b-656d-4a47-b407-bbaf243903a5\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703345 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703367 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703448 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out\") pod \"fa23d41b-7d65-437d-aabf-afec242b5401\" (UID: \"fa23d41b-7d65-437d-aabf-afec242b5401\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.703499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts\") pod \"67dba26b-656d-4a47-b407-bbaf243903a5\" (UID: \"67dba26b-656d-4a47-b407-bbaf243903a5\") " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.705278 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67dba26b-656d-4a47-b407-bbaf243903a5" (UID: "67dba26b-656d-4a47-b407-bbaf243903a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.705807 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.713576 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.714133 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.714636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.717028 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9" (OuterVolumeSpecName: "kube-api-access-45lx9") pod "67dba26b-656d-4a47-b407-bbaf243903a5" (UID: "67dba26b-656d-4a47-b407-bbaf243903a5"). InnerVolumeSpecName "kube-api-access-45lx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.723633 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb" (OuterVolumeSpecName: "kube-api-access-zrdpb") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "kube-api-access-zrdpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.726689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.732393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out" (OuterVolumeSpecName: "config-out") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.743124 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config" (OuterVolumeSpecName: "config") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.766484 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819841 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67dba26b-656d-4a47-b407-bbaf243903a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819873 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819914 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") on node \"crc\" " Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819931 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819945 4809 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819956 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdpb\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-kube-api-access-zrdpb\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819965 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819976 4809 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa23d41b-7d65-437d-aabf-afec242b5401-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819986 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45lx9\" (UniqueName: \"kubernetes.io/projected/67dba26b-656d-4a47-b407-bbaf243903a5-kube-api-access-45lx9\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.819996 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fa23d41b-7d65-437d-aabf-afec242b5401-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.820006 4809 reconciler_common.go:293] 
"Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa23d41b-7d65-437d-aabf-afec242b5401-config-out\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.830577 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wxkll"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.842987 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-88e3-account-create-update-r9tck"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.860292 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config" (OuterVolumeSpecName: "web-config") pod "fa23d41b-7d65-437d-aabf-afec242b5401" (UID: "fa23d41b-7d65-437d-aabf-afec242b5401"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.861946 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7zml8"] Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.886296 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.886441 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8") on node "crc" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.891167 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-87d4-account-create-update-zw2lg"] Feb 26 14:38:55 crc kubenswrapper[4809]: W0226 14:38:55.901448 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde27bcc6_91a3_4610_9611_0f1d5065b8a7.slice/crio-e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48 WatchSource:0}: Error finding container e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48: Status 404 returned error can't find the container with id e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48 Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.926946 4809 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa23d41b-7d65-437d-aabf-afec242b5401-web-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:55 crc kubenswrapper[4809]: I0226 14:38:55.926978 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.258092 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a996-account-create-update-hlkf2" event={"ID":"70ac330f-10c7-4cf8-8a22-0ad54c655091","Type":"ContainerStarted","Data":"310847742afb45da31139ebe3f52266eff4054c69eb659b25ccf0c9ea77b9d49"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.258393 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a996-account-create-update-hlkf2" event={"ID":"70ac330f-10c7-4cf8-8a22-0ad54c655091","Type":"ContainerStarted","Data":"14f1b7e2c21b418e2fbb5a5c2cbf07825c8237d6f28cb5518cd0961e67210522"} Feb 26 14:38:56 crc kubenswrapper[4809]: 
I0226 14:38:56.277839 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-txbc2" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.279965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-88e3-account-create-update-r9tck" event={"ID":"388698b9-4d79-4309-94a1-d867b2dd8cdc","Type":"ContainerStarted","Data":"947f82e433c78833aebae05f2ae90830a3a8149043e70e80e3a3320f6cf4d309"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.280202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wxkll" event={"ID":"b6e4ee77-6195-4e59-85b2-ff393dfe933e","Type":"ContainerStarted","Data":"4e6004fda2c9700b99115b6123c2bec5bf56b3262a4953cfda56a7f294483196"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.280312 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-txbc2" event={"ID":"67dba26b-656d-4a47-b407-bbaf243903a5","Type":"ContainerDied","Data":"fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.280388 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7d324a31542f293c1fb0451f24588475720ff4a261dbc3c5f697e790edcf25" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.281881 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7zml8" event={"ID":"de27bcc6-91a3-4610-9611-0f1d5065b8a7","Type":"ContainerStarted","Data":"e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.286833 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9528bca-6e44-425a-8abe-9ecbed0b60d0" containerID="933197a4c7afe8168d9b3cc7c49bd43aa861001d37bdd49b13c11e512ab6feb7" exitCode=0 Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.286919 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b5tkr" event={"ID":"a9528bca-6e44-425a-8abe-9ecbed0b60d0","Type":"ContainerDied","Data":"933197a4c7afe8168d9b3cc7c49bd43aa861001d37bdd49b13c11e512ab6feb7"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.286947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b5tkr" event={"ID":"a9528bca-6e44-425a-8abe-9ecbed0b60d0","Type":"ContainerStarted","Data":"5c218aa55b587dc1ce65ab1345aa8c23ff70e1e9350189753642cc85c861fabc"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.296969 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8s7lr"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.326702 4809 generic.go:334] "Generic (PLEG): container finished" podID="8408ab37-9e60-4307-bd8d-1b1d9db3f539" containerID="ae2629ed7db0eab068b1526f32757812a9ac16cc6106f5d8d08141813780c33d" exitCode=0 Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.326858 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b727-account-create-update-m7lbg" event={"ID":"8408ab37-9e60-4307-bd8d-1b1d9db3f539","Type":"ContainerDied","Data":"ae2629ed7db0eab068b1526f32757812a9ac16cc6106f5d8d08141813780c33d"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.352036 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-93c6-account-create-update-mkb9z"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.358667 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"fa23d41b-7d65-437d-aabf-afec242b5401","Type":"ContainerDied","Data":"4dc6a7e4140a3131748613c63d4741a5dfddd89bfc70cf3e88790e27a17f1c74"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.358944 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.363115 4809 scope.go:117] "RemoveContainer" containerID="5bcf07ec7aaf57f9f2f82b8666e67920ad9fa957763f47ec51439c024304b6f2" Feb 26 14:38:56 crc kubenswrapper[4809]: W0226 14:38:56.370701 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc868b6d6_47c7_45db_bfb1_f24b55ce40df.slice/crio-a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab WatchSource:0}: Error finding container a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab: Status 404 returned error can't find the container with id a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.371403 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlv9" event={"ID":"7cda5ba3-0335-4853-a084-c30c335e99ff","Type":"ContainerStarted","Data":"e6757ab60601c78d658ee5bc813f0def24c54db3e5aa9da9128cbb3ad5212f92"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.371445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlv9" event={"ID":"7cda5ba3-0335-4853-a084-c30c335e99ff","Type":"ContainerStarted","Data":"6efa52db587edacd40bef1b0510233d7f147dba816ddd274d055ba2c2ce41114"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.374851 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-87d4-account-create-update-zw2lg" event={"ID":"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a","Type":"ContainerStarted","Data":"66d2142101882c45adea0735d453a70f54f53686baac87ec7c74034974dff4ea"} Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.386356 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ae2c-account-create-update-rj9zh"] Feb 26 14:38:56 crc kubenswrapper[4809]: W0226 14:38:56.386628 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf6ffefd_5f03_430c_a852_5a971a3959a2.slice/crio-23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0 WatchSource:0}: Error finding container 23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0: Status 404 returned error can't find the container with id 23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0 Feb 26 14:38:56 crc kubenswrapper[4809]: W0226 14:38:56.400837 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccf338c_7d94_4016_aa75_1986453f45a4.slice/crio-4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f WatchSource:0}: Error finding container 4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f: Status 404 returned error can't find the container with id 4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.415214 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q57fj"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.426229 4809 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vsnnq"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.434908 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fc96-account-create-update-fp688"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.446715 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a996-account-create-update-hlkf2" podStartSLOduration=3.446694339 podStartE2EDuration="3.446694339s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:56.282760106 +0000 UTC m=+1514.756080629" watchObservedRunningTime="2026-02-26 14:38:56.446694339 +0000 UTC m=+1514.920014852" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.471724 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-cjlv9" podStartSLOduration=3.471700758 podStartE2EDuration="3.471700758s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:56.389324041 +0000 UTC m=+1514.862644564" watchObservedRunningTime="2026-02-26 14:38:56.471700758 +0000 UTC m=+1514.945021281" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.587190 4809 scope.go:117] "RemoveContainer" containerID="89433ebdd7f32fe2659be9b50926034615257f31cb3a9a790ae9d57900669bc5" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.692086 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.727102 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745214 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:56 crc kubenswrapper[4809]: E0226 14:38:56.745747 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="prometheus" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745775 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="prometheus" Feb 26 14:38:56 crc kubenswrapper[4809]: E0226 14:38:56.745803 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="init-config-reloader" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745810 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="init-config-reloader" Feb 26 14:38:56 crc kubenswrapper[4809]: E0226 14:38:56.745832 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="thanos-sidecar" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745839 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="thanos-sidecar" Feb 26 14:38:56 crc kubenswrapper[4809]: E0226 14:38:56.745848 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="config-reloader" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745855 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="config-reloader" Feb 26 14:38:56 crc kubenswrapper[4809]: E0226 14:38:56.745873 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dba26b-656d-4a47-b407-bbaf243903a5" containerName="mariadb-account-create-update" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.745881 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dba26b-656d-4a47-b407-bbaf243903a5" containerName="mariadb-account-create-update" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.746148 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="prometheus" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.746176 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="thanos-sidecar" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.746191 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dba26b-656d-4a47-b407-bbaf243903a5" containerName="mariadb-account-create-update" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.746216 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" containerName="config-reloader" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.748473 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.750852 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.751440 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.751631 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.751700 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.751853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.751964 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.752041 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.752433 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d2lgm" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.764511 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.764749 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.935504 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-54hc5\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-kube-api-access-54hc5\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.935902 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5b487ff7-ff62-4570-a75c-314514fb7496-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.935945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.935993 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-config\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936090 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936198 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936254 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936309 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936377 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936494 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936542 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:56 crc kubenswrapper[4809]: I0226 14:38:56.936576 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hc5\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-kube-api-access-54hc5\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038249 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5b487ff7-ff62-4570-a75c-314514fb7496-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038322 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038397 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038429 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038459 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038544 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038566 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.038588 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.039300 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.043492 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.047609 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5b487ff7-ff62-4570-a75c-314514fb7496-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.053260 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.055086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.057675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-config\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.058715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.066130 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.075447 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5b487ff7-ff62-4570-a75c-314514fb7496-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.076101 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.103163 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hc5\" (UniqueName: \"kubernetes.io/projected/5b487ff7-ff62-4570-a75c-314514fb7496-kube-api-access-54hc5\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.125976 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5b487ff7-ff62-4570-a75c-314514fb7496-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.207335 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.207388 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d633318c6bbf353b83d49b28dc3a043863b879cab9b57f6fec512583333cd15/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.316064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5396c3-3cd6-419a-bddb-cb2eacb6b9e8\") pod \"prometheus-metric-storage-0\" (UID: \"5b487ff7-ff62-4570-a75c-314514fb7496\") " pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.328168 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fbfbm"] Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.330062 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.356506 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fbfbm"] Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.378777 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.393931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ae2c-account-create-update-rj9zh" event={"ID":"c868b6d6-47c7-45db-bfb1-f24b55ce40df","Type":"ContainerStarted","Data":"a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.400793 4809 generic.go:334] "Generic (PLEG): container finished" podID="70ac330f-10c7-4cf8-8a22-0ad54c655091" containerID="310847742afb45da31139ebe3f52266eff4054c69eb659b25ccf0c9ea77b9d49" exitCode=0 Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.401075 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a996-account-create-update-hlkf2" event={"ID":"70ac330f-10c7-4cf8-8a22-0ad54c655091","Type":"ContainerDied","Data":"310847742afb45da31139ebe3f52266eff4054c69eb659b25ccf0c9ea77b9d49"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.405348 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7zml8" event={"ID":"de27bcc6-91a3-4610-9611-0f1d5065b8a7","Type":"ContainerStarted","Data":"4eb318db63aa29e504a7302a6a54c7b3bc31cb9b09982841b24d4f2423593da1"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.407534 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q57fj" event={"ID":"6ccf338c-7d94-4016-aa75-1986453f45a4","Type":"ContainerStarted","Data":"4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.422094 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4865-account-create-update-qwlm4"] Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.423585 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.431120 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.436447 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93c6-account-create-update-mkb9z" event={"ID":"bf6ffefd-5f03-430c-a852-5a971a3959a2","Type":"ContainerStarted","Data":"32cd197b16e8e3e17556b51755eaee999fa2571c99b868e9f2279bb59d34ac08"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.436519 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93c6-account-create-update-mkb9z" event={"ID":"bf6ffefd-5f03-430c-a852-5a971a3959a2","Type":"ContainerStarted","Data":"23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.440946 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" event={"ID":"15a2cf63-3d00-4de9-ae7e-c6d45402e573","Type":"ContainerStarted","Data":"c10e583565c23ff4b94a937d19cb8e51073698fc165b1d6f90699a5db11e26b9"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.440996 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" event={"ID":"15a2cf63-3d00-4de9-ae7e-c6d45402e573","Type":"ContainerStarted","Data":"c6bf75a837a2939328cf60406f1e044a13aadb53371698632f35788c5719b3ad"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.455265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8s7lr" event={"ID":"84213e71-f500-4e4a-8a0a-123129d86cf4","Type":"ContainerStarted","Data":"ce06c3686d21a4975055825abdae1ed5f64d537743a4ee4d347c5ac609a47dd3"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.455394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8s7lr" event={"ID":"84213e71-f500-4e4a-8a0a-123129d86cf4","Type":"ContainerStarted","Data":"61dc1efeb3c2d19bb9ff622f1b23af94617331df0fcab345db247c4a648b6e2b"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.457103 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4865-account-create-update-qwlm4"] Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.459002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-88e3-account-create-update-r9tck" event={"ID":"388698b9-4d79-4309-94a1-d867b2dd8cdc","Type":"ContainerStarted","Data":"b5b3868eb43bb1ff8325589f40d8f6f78d35d3f037948c9e646a9c9f7b11d32a"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.461237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wxkll" event={"ID":"b6e4ee77-6195-4e59-85b2-ff393dfe933e","Type":"ContainerStarted","Data":"61e7b56a944495b0a5638199c9fcd4f7457b4303bbea0ab1aac9d1b3f606fa5e"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.462563 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5ns\" (UniqueName: \"kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.462683 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.465908 4809 generic.go:334] "Generic (PLEG): container finished" podID="7cda5ba3-0335-4853-a084-c30c335e99ff" containerID="e6757ab60601c78d658ee5bc813f0def24c54db3e5aa9da9128cbb3ad5212f92" exitCode=0 Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.466093 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlv9" event={"ID":"7cda5ba3-0335-4853-a084-c30c335e99ff","Type":"ContainerDied","Data":"e6757ab60601c78d658ee5bc813f0def24c54db3e5aa9da9128cbb3ad5212f92"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.478763 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-87d4-account-create-update-zw2lg" event={"ID":"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a","Type":"ContainerStarted","Data":"05ff486b6a4b70f6a8467d6fd90590cb9aad27c45ea430ede2b95067360de5a7"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.481993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" event={"ID":"6d992047-47b5-4e8f-8b23-9e87ceef8d70","Type":"ContainerStarted","Data":"c1431c2c01217f5135ebf13d025d7ec2d9cd69c64a65c764ce4e39b547fd2571"} Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.489076 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-7zml8" podStartSLOduration=4.489052569 podStartE2EDuration="4.489052569s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.43412119 +0000 UTC m=+1515.907441713" watchObservedRunningTime="2026-02-26 14:38:57.489052569 +0000 UTC m=+1515.962373092" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.517362 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-93c6-account-create-update-mkb9z" podStartSLOduration=3.517345432 podStartE2EDuration="3.517345432s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.467881308 +0000 UTC m=+1515.941201831" watchObservedRunningTime="2026-02-26 14:38:57.517345432 +0000 UTC m=+1515.990665955" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.552574 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" podStartSLOduration=3.55255373 podStartE2EDuration="3.55255373s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.487780533 +0000 UTC m=+1515.961101056" watchObservedRunningTime="2026-02-26 14:38:57.55255373 +0000 UTC m=+1516.025874253" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.557348 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8s7lr" podStartSLOduration=3.557329416 podStartE2EDuration="3.557329416s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.505397013 +0000 UTC m=+1515.978717536" watchObservedRunningTime="2026-02-26 14:38:57.557329416 +0000 UTC m=+1516.030649939" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.567812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.567930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.568247 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxz5\" (UniqueName: \"kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.568389 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5ns\" (UniqueName: \"kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.569641 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.577721 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-88e3-account-create-update-r9tck" podStartSLOduration=4.577703644 podStartE2EDuration="4.577703644s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.523735782 +0000 UTC m=+1515.997056295" watchObservedRunningTime="2026-02-26 14:38:57.577703644 +0000 UTC m=+1516.051024167" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.582094 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-87d4-account-create-update-zw2lg" podStartSLOduration=4.582080738 podStartE2EDuration="4.582080738s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.539762787 +0000 UTC m=+1516.013083330" watchObservedRunningTime="2026-02-26 14:38:57.582080738 +0000 UTC m=+1516.055401261" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.595833 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hk5ns\" (UniqueName: \"kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns\") pod \"glance-db-create-fbfbm\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.604787 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wxkll" podStartSLOduration=4.6047695520000005 podStartE2EDuration="4.604769552s" podCreationTimestamp="2026-02-26 14:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:57.585412093 +0000 UTC m=+1516.058732626" watchObservedRunningTime="2026-02-26 14:38:57.604769552 +0000 UTC m=+1516.078090075" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.661762 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbfbm" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.671029 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.671145 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxz5\" (UniqueName: \"kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.672712 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.688401 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxz5\" (UniqueName: \"kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5\") pod \"glance-4865-account-create-update-qwlm4\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.757884 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.815310 4809 scope.go:117] "RemoveContainer" containerID="e5a2c9e7dddac045c590e1d78e0b0c618b83ede6a8c91719117a33a1c209adbb" Feb 26 14:38:57 crc kubenswrapper[4809]: I0226 14:38:57.918911 4809 scope.go:117] "RemoveContainer" containerID="4083f5c5a82bc4f7ce87948f52bae972188ad54d1b9c8efd42590ccf9611731d" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.281358 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa23d41b-7d65-437d-aabf-afec242b5401" path="/var/lib/kubelet/pods/fa23d41b-7d65-437d-aabf-afec242b5401/volumes" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.450749 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.479990 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.506360 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts\") pod \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.506549 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrz86\" (UniqueName: \"kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86\") pod \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\" (UID: \"a9528bca-6e44-425a-8abe-9ecbed0b60d0\") " Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.506700 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts\") pod \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.506748 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldmp4\" (UniqueName: \"kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4\") pod \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\" (UID: \"8408ab37-9e60-4307-bd8d-1b1d9db3f539\") " Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.507102 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9528bca-6e44-425a-8abe-9ecbed0b60d0" (UID: "a9528bca-6e44-425a-8abe-9ecbed0b60d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.508488 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8408ab37-9e60-4307-bd8d-1b1d9db3f539" (UID: "8408ab37-9e60-4307-bd8d-1b1d9db3f539"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.511227 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8408ab37-9e60-4307-bd8d-1b1d9db3f539-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.511254 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9528bca-6e44-425a-8abe-9ecbed0b60d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.511365 4809 generic.go:334] "Generic (PLEG): container finished" podID="b6e4ee77-6195-4e59-85b2-ff393dfe933e" containerID="61e7b56a944495b0a5638199c9fcd4f7457b4303bbea0ab1aac9d1b3f606fa5e" exitCode=0 Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.511815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wxkll" event={"ID":"b6e4ee77-6195-4e59-85b2-ff393dfe933e","Type":"ContainerDied","Data":"61e7b56a944495b0a5638199c9fcd4f7457b4303bbea0ab1aac9d1b3f606fa5e"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.516894 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4" (OuterVolumeSpecName: "kube-api-access-ldmp4") pod "8408ab37-9e60-4307-bd8d-1b1d9db3f539" (UID: "8408ab37-9e60-4307-bd8d-1b1d9db3f539"). InnerVolumeSpecName "kube-api-access-ldmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.516996 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86" (OuterVolumeSpecName: "kube-api-access-nrz86") pod "a9528bca-6e44-425a-8abe-9ecbed0b60d0" (UID: "a9528bca-6e44-425a-8abe-9ecbed0b60d0"). InnerVolumeSpecName "kube-api-access-nrz86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.519101 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ae2c-account-create-update-rj9zh" event={"ID":"c868b6d6-47c7-45db-bfb1-f24b55ce40df","Type":"ContainerStarted","Data":"dec548d58c17f7a898778ef58a347e6bea17538d48c008511e91f8204e2efb7b"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.522376 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" event={"ID":"6d992047-47b5-4e8f-8b23-9e87ceef8d70","Type":"ContainerStarted","Data":"c608fd969da2349b9945cd8606af19ec50bd74bc663106e69e21a55976eb8b09"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.524892 4809 generic.go:334] "Generic (PLEG): container finished" podID="de27bcc6-91a3-4610-9611-0f1d5065b8a7" containerID="4eb318db63aa29e504a7302a6a54c7b3bc31cb9b09982841b24d4f2423593da1" exitCode=0 Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.524962 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7zml8" event={"ID":"de27bcc6-91a3-4610-9611-0f1d5065b8a7","Type":"ContainerDied","Data":"4eb318db63aa29e504a7302a6a54c7b3bc31cb9b09982841b24d4f2423593da1"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.527159 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b5tkr" event={"ID":"a9528bca-6e44-425a-8abe-9ecbed0b60d0","Type":"ContainerDied","Data":"5c218aa55b587dc1ce65ab1345aa8c23ff70e1e9350189753642cc85c861fabc"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.527171 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b5tkr" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.527186 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c218aa55b587dc1ce65ab1345aa8c23ff70e1e9350189753642cc85c861fabc" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.529438 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b727-account-create-update-m7lbg" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.540965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b727-account-create-update-m7lbg" event={"ID":"8408ab37-9e60-4307-bd8d-1b1d9db3f539","Type":"ContainerDied","Data":"a75557214264117d7a82cc0f275fa5a9bb7204150616de3737916cdf7dfcadb7"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.541041 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75557214264117d7a82cc0f275fa5a9bb7204150616de3737916cdf7dfcadb7" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.543517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q57fj" event={"ID":"6ccf338c-7d94-4016-aa75-1986453f45a4","Type":"ContainerStarted","Data":"74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25"} Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.618009 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrz86\" (UniqueName: \"kubernetes.io/projected/a9528bca-6e44-425a-8abe-9ecbed0b60d0-kube-api-access-nrz86\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.618059 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldmp4\" (UniqueName: \"kubernetes.io/projected/8408ab37-9e60-4307-bd8d-1b1d9db3f539-kube-api-access-ldmp4\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.651820 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" podStartSLOduration=4.651796335 podStartE2EDuration="4.651796335s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:58.559721152 +0000 UTC m=+1517.033041685" watchObservedRunningTime="2026-02-26 14:38:58.651796335 +0000 UTC m=+1517.125116878" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.684155 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ae2c-account-create-update-rj9zh" podStartSLOduration=4.684133863 podStartE2EDuration="4.684133863s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:58.603047782 +0000 UTC m=+1517.076368305" watchObservedRunningTime="2026-02-26 14:38:58.684133863 +0000 UTC m=+1517.157454386" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.688789 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-q57fj" podStartSLOduration=4.688771614 podStartE2EDuration="4.688771614s" podCreationTimestamp="2026-02-26 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:58.621986359 +0000 UTC m=+1517.095306882" watchObservedRunningTime="2026-02-26 14:38:58.688771614 +0000 UTC m=+1517.162092137" Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.773202 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 14:38:58 crc kubenswrapper[4809]: W0226 14:38:58.779619 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b487ff7_ff62_4570_a75c_314514fb7496.slice/crio-9ac4882f0f0cbc30cf8996ad7c2f46f6a383bd65fce9c433293e9023e1024a14 WatchSource:0}: Error finding container 9ac4882f0f0cbc30cf8996ad7c2f46f6a383bd65fce9c433293e9023e1024a14: Status 404 returned error can't find the container with id 9ac4882f0f0cbc30cf8996ad7c2f46f6a383bd65fce9c433293e9023e1024a14 Feb 26 14:38:58 crc kubenswrapper[4809]: I0226 14:38:58.890995 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fbfbm"] Feb 26 14:38:58 crc kubenswrapper[4809]: W0226 14:38:58.893341 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c95b42_bbb4_4348_919d_82e14dccc8b6.slice/crio-000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324 WatchSource:0}: Error finding container 000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324: Status 404 returned error can't find the container with id 000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324 Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.044654 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-txbc2"] Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.062962 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-txbc2"] Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.110304 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-br7cm"] Feb 26 14:38:59 crc kubenswrapper[4809]: E0226 14:38:59.110742 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8408ab37-9e60-4307-bd8d-1b1d9db3f539" containerName="mariadb-account-create-update" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.110755 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8408ab37-9e60-4307-bd8d-1b1d9db3f539" containerName="mariadb-account-create-update" Feb 26 14:38:59 crc kubenswrapper[4809]: E0226 14:38:59.110768 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9528bca-6e44-425a-8abe-9ecbed0b60d0" containerName="mariadb-database-create" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.110774 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9528bca-6e44-425a-8abe-9ecbed0b60d0" containerName="mariadb-database-create" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.110960 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9528bca-6e44-425a-8abe-9ecbed0b60d0" containerName="mariadb-database-create" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.110981 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8408ab37-9e60-4307-bd8d-1b1d9db3f539" containerName="mariadb-account-create-update" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.111685 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.114031 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.133691 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt7kw\" (UniqueName: \"kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.133830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.138353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-br7cm"] Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.212652 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4865-account-create-update-qwlm4"] Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.236466 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.236674 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt7kw\" (UniqueName: \"kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.237619 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.274969 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt7kw\" (UniqueName: \"kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw\") pod \"root-account-create-update-br7cm\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.419706 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.436058 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.439890 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-br7cm" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.441373 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts\") pod \"70ac330f-10c7-4cf8-8a22-0ad54c655091\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.441570 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz8w6\" (UniqueName: \"kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6\") pod \"70ac330f-10c7-4cf8-8a22-0ad54c655091\" (UID: \"70ac330f-10c7-4cf8-8a22-0ad54c655091\") " Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.444862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70ac330f-10c7-4cf8-8a22-0ad54c655091" (UID: "70ac330f-10c7-4cf8-8a22-0ad54c655091"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.446683 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6" (OuterVolumeSpecName: "kube-api-access-fz8w6") pod "70ac330f-10c7-4cf8-8a22-0ad54c655091" (UID: "70ac330f-10c7-4cf8-8a22-0ad54c655091"). InnerVolumeSpecName "kube-api-access-fz8w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.545737 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts\") pod \"7cda5ba3-0335-4853-a084-c30c335e99ff\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.545810 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fklgq\" (UniqueName: \"kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq\") pod \"7cda5ba3-0335-4853-a084-c30c335e99ff\" (UID: \"7cda5ba3-0335-4853-a084-c30c335e99ff\") " Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.546607 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ac330f-10c7-4cf8-8a22-0ad54c655091-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.546626 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz8w6\" (UniqueName: \"kubernetes.io/projected/70ac330f-10c7-4cf8-8a22-0ad54c655091-kube-api-access-fz8w6\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.547323 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cda5ba3-0335-4853-a084-c30c335e99ff" (UID: "7cda5ba3-0335-4853-a084-c30c335e99ff"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.626274 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4865-account-create-update-qwlm4" event={"ID":"27297557-090e-4476-ae2c-266a0bb3fdb6","Type":"ContainerStarted","Data":"2980dfdb09196b29e6c19ce8e86c93508756d5f3bcea5df754b9e7a84c45c82f"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.651673 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerStarted","Data":"9ac4882f0f0cbc30cf8996ad7c2f46f6a383bd65fce9c433293e9023e1024a14"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.655482 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cda5ba3-0335-4853-a084-c30c335e99ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.656470 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq" (OuterVolumeSpecName: "kube-api-access-fklgq") pod "7cda5ba3-0335-4853-a084-c30c335e99ff" (UID: "7cda5ba3-0335-4853-a084-c30c335e99ff"). InnerVolumeSpecName "kube-api-access-fklgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.660441 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlv9" event={"ID":"7cda5ba3-0335-4853-a084-c30c335e99ff","Type":"ContainerDied","Data":"6efa52db587edacd40bef1b0510233d7f147dba816ddd274d055ba2c2ce41114"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.660467 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cjlv9" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.660488 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6efa52db587edacd40bef1b0510233d7f147dba816ddd274d055ba2c2ce41114" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.676418 4809 generic.go:334] "Generic (PLEG): container finished" podID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" containerID="05ff486b6a4b70f6a8467d6fd90590cb9aad27c45ea430ede2b95067360de5a7" exitCode=0 Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.676713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-87d4-account-create-update-zw2lg" event={"ID":"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a","Type":"ContainerDied","Data":"05ff486b6a4b70f6a8467d6fd90590cb9aad27c45ea430ede2b95067360de5a7"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.691359 4809 generic.go:334] "Generic (PLEG): container finished" podID="84213e71-f500-4e4a-8a0a-123129d86cf4" containerID="ce06c3686d21a4975055825abdae1ed5f64d537743a4ee4d347c5ac609a47dd3" exitCode=0 Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.691536 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8s7lr" event={"ID":"84213e71-f500-4e4a-8a0a-123129d86cf4","Type":"ContainerDied","Data":"ce06c3686d21a4975055825abdae1ed5f64d537743a4ee4d347c5ac609a47dd3"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.732577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a996-account-create-update-hlkf2" event={"ID":"70ac330f-10c7-4cf8-8a22-0ad54c655091","Type":"ContainerDied","Data":"14f1b7e2c21b418e2fbb5a5c2cbf07825c8237d6f28cb5518cd0961e67210522"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.732621 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f1b7e2c21b418e2fbb5a5c2cbf07825c8237d6f28cb5518cd0961e67210522" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.732689 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a996-account-create-update-hlkf2" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.760215 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fklgq\" (UniqueName: \"kubernetes.io/projected/7cda5ba3-0335-4853-a084-c30c335e99ff-kube-api-access-fklgq\") on node \"crc\" DevicePath \"\"" Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.773786 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"1f8fa87278a8f2e2b1c0657933f3ca1c6dc6fa78baac99e2c83d6912e576b861"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.787746 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbfbm" event={"ID":"98c95b42-bbb4-4348-919d-82e14dccc8b6","Type":"ContainerStarted","Data":"ce975f1ea2c4def4a76ccea807f003154aac718656f993c1fa57764a13c8a4ad"} Feb 26 14:38:59 crc kubenswrapper[4809]: I0226 14:38:59.787790 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbfbm" event={"ID":"98c95b42-bbb4-4348-919d-82e14dccc8b6","Type":"ContainerStarted","Data":"000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324"} Feb 26 14:38:59 crc kubenswrapper[4809]: E0226 14:38:59.955614 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccf338c_7d94_4016_aa75_1986453f45a4.slice/crio-conmon-74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cda5ba3_0335_4853_a084_c30c335e99ff.slice/crio-6efa52db587edacd40bef1b0510233d7f147dba816ddd274d055ba2c2ce41114\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cda5ba3_0335_4853_a084_c30c335e99ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccf338c_7d94_4016_aa75_1986453f45a4.slice/crio-74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d992047_47b5_4e8f_8b23_9e87ceef8d70.slice/crio-c608fd969da2349b9945cd8606af19ec50bd74bc663106e69e21a55976eb8b09.scope\": RecentStats: unable to find data in memory cache]" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.299642 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67dba26b-656d-4a47-b407-bbaf243903a5" path="/var/lib/kubelet/pods/67dba26b-656d-4a47-b407-bbaf243903a5/volumes" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.429471 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fbfbm" podStartSLOduration=3.429444212 podStartE2EDuration="3.429444212s" podCreationTimestamp="2026-02-26 14:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:38:59.810421725 +0000 UTC m=+1518.283742248" watchObservedRunningTime="2026-02-26 14:39:00.429444212 +0000 UTC m=+1518.902764735" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.440057 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wxkll" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.440559 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-br7cm"] Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.492196 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7zml8" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.511499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvvw\" (UniqueName: \"kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw\") pod \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.511551 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts\") pod \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\" (UID: \"b6e4ee77-6195-4e59-85b2-ff393dfe933e\") " Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.514242 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6e4ee77-6195-4e59-85b2-ff393dfe933e" (UID: "b6e4ee77-6195-4e59-85b2-ff393dfe933e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.527656 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw" (OuterVolumeSpecName: "kube-api-access-6cvvw") pod "b6e4ee77-6195-4e59-85b2-ff393dfe933e" (UID: "b6e4ee77-6195-4e59-85b2-ff393dfe933e"). InnerVolumeSpecName "kube-api-access-6cvvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.614938 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts\") pod \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.615146 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8tcd\" (UniqueName: \"kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd\") pod \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\" (UID: \"de27bcc6-91a3-4610-9611-0f1d5065b8a7\") " Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.615487 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de27bcc6-91a3-4610-9611-0f1d5065b8a7" (UID: "de27bcc6-91a3-4610-9611-0f1d5065b8a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.615712 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de27bcc6-91a3-4610-9611-0f1d5065b8a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.615983 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvvw\" (UniqueName: \"kubernetes.io/projected/b6e4ee77-6195-4e59-85b2-ff393dfe933e-kube-api-access-6cvvw\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.615993 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6e4ee77-6195-4e59-85b2-ff393dfe933e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.619159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd" (OuterVolumeSpecName: "kube-api-access-p8tcd") pod "de27bcc6-91a3-4610-9611-0f1d5065b8a7" (UID: "de27bcc6-91a3-4610-9611-0f1d5065b8a7"). InnerVolumeSpecName "kube-api-access-p8tcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.718654 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8tcd\" (UniqueName: \"kubernetes.io/projected/de27bcc6-91a3-4610-9611-0f1d5065b8a7-kube-api-access-p8tcd\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.803389 4809 generic.go:334] "Generic (PLEG): container finished" podID="388698b9-4d79-4309-94a1-d867b2dd8cdc" containerID="b5b3868eb43bb1ff8325589f40d8f6f78d35d3f037948c9e646a9c9f7b11d32a" exitCode=0 Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.803559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-88e3-account-create-update-r9tck" event={"ID":"388698b9-4d79-4309-94a1-d867b2dd8cdc","Type":"ContainerDied","Data":"b5b3868eb43bb1ff8325589f40d8f6f78d35d3f037948c9e646a9c9f7b11d32a"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.806167 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wxkll" event={"ID":"b6e4ee77-6195-4e59-85b2-ff393dfe933e","Type":"ContainerDied","Data":"4e6004fda2c9700b99115b6123c2bec5bf56b3262a4953cfda56a7f294483196"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.806243 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6004fda2c9700b99115b6123c2bec5bf56b3262a4953cfda56a7f294483196" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.806308 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wxkll" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.815631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4865-account-create-update-qwlm4" event={"ID":"27297557-090e-4476-ae2c-266a0bb3fdb6","Type":"ContainerStarted","Data":"db56aa97ad19933aaa200d9d9a1d69cccc65324ec4b4bd6b99347587546e80ae"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.817305 4809 generic.go:334] "Generic (PLEG): container finished" podID="6d992047-47b5-4e8f-8b23-9e87ceef8d70" containerID="c608fd969da2349b9945cd8606af19ec50bd74bc663106e69e21a55976eb8b09" exitCode=0 Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.817359 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" event={"ID":"6d992047-47b5-4e8f-8b23-9e87ceef8d70","Type":"ContainerDied","Data":"c608fd969da2349b9945cd8606af19ec50bd74bc663106e69e21a55976eb8b09"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.819698 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7zml8" event={"ID":"de27bcc6-91a3-4610-9611-0f1d5065b8a7","Type":"ContainerDied","Data":"e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.819725 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90a37cbf4a930704137a1c11876ff59f777c48481ae6a88bca0fa209a9eca48" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.819735 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7zml8" Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.831353 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"d96216efb7e8b38dd95b5f08a5a9e575d6b09e681fcec9cedf674363020ed184"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.831396 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"b65ead652d7d49140c8630517eb38f65837c4095963c84eaceda3701598f3ee4"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.831406 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"6584227a31f4c1963d520b74ee928f22bc640b5c8421510ad6fe137c056a668b"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.833178 4809 generic.go:334] "Generic (PLEG): container finished" podID="6ccf338c-7d94-4016-aa75-1986453f45a4" containerID="74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25" exitCode=0 Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.833228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q57fj" event={"ID":"6ccf338c-7d94-4016-aa75-1986453f45a4","Type":"ContainerDied","Data":"74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.834856 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-br7cm" event={"ID":"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d","Type":"ContainerStarted","Data":"478493260f335a9117c4aa7e88a8a4e2074736a6d1e2e2da6352e7cdc789eabd"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.834885 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-br7cm" event={"ID":"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d","Type":"ContainerStarted","Data":"34fb8a5e478a237ca1ff41889566162a09b57854f6f4bc58b16b2130e656660a"} Feb 26 14:39:00 crc kubenswrapper[4809]: I0226 14:39:00.862006 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4865-account-create-update-qwlm4" podStartSLOduration=3.861983637 podStartE2EDuration="3.861983637s" podCreationTimestamp="2026-02-26 14:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:00.84659849 +0000 UTC m=+1519.319919013" watchObservedRunningTime="2026-02-26 14:39:00.861983637 +0000 UTC m=+1519.335304160" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.589091 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.595669 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8s7lr" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.651120 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbm9b\" (UniqueName: \"kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b\") pod \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.651306 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts\") pod \"84213e71-f500-4e4a-8a0a-123129d86cf4\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.651374 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznwx\" (UniqueName: \"kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx\") pod \"84213e71-f500-4e4a-8a0a-123129d86cf4\" (UID: \"84213e71-f500-4e4a-8a0a-123129d86cf4\") " Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.651423 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts\") pod \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\" (UID: \"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a\") " Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.652598 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" (UID: "dbbc3ad8-368d-42a5-ba41-2c89e8b0502a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.656405 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84213e71-f500-4e4a-8a0a-123129d86cf4" (UID: "84213e71-f500-4e4a-8a0a-123129d86cf4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.754073 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84213e71-f500-4e4a-8a0a-123129d86cf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.754406 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.849598 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.849589 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-87d4-account-create-update-zw2lg" event={"ID":"dbbc3ad8-368d-42a5-ba41-2c89e8b0502a","Type":"ContainerDied","Data":"66d2142101882c45adea0735d453a70f54f53686baac87ec7c74034974dff4ea"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.849709 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d2142101882c45adea0735d453a70f54f53686baac87ec7c74034974dff4ea" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.852074 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8s7lr" event={"ID":"84213e71-f500-4e4a-8a0a-123129d86cf4","Type":"ContainerDied","Data":"61dc1efeb3c2d19bb9ff622f1b23af94617331df0fcab345db247c4a648b6e2b"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.852110 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61dc1efeb3c2d19bb9ff622f1b23af94617331df0fcab345db247c4a648b6e2b" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.852247 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8s7lr" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.853630 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx" (OuterVolumeSpecName: "kube-api-access-zznwx") pod "84213e71-f500-4e4a-8a0a-123129d86cf4" (UID: "84213e71-f500-4e4a-8a0a-123129d86cf4"). InnerVolumeSpecName "kube-api-access-zznwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.856840 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b" (OuterVolumeSpecName: "kube-api-access-bbm9b") pod "dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" (UID: "dbbc3ad8-368d-42a5-ba41-2c89e8b0502a"). InnerVolumeSpecName "kube-api-access-bbm9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.859326 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zznwx\" (UniqueName: \"kubernetes.io/projected/84213e71-f500-4e4a-8a0a-123129d86cf4-kube-api-access-zznwx\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.859364 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbm9b\" (UniqueName: \"kubernetes.io/projected/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a-kube-api-access-bbm9b\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.859924 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"60d9b17da2c60446fd018c5009480ea27de3faa985f5f3e2ada08a193a1a7e09"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.861508 4809 generic.go:334] "Generic (PLEG): container finished" podID="98c95b42-bbb4-4348-919d-82e14dccc8b6" containerID="ce975f1ea2c4def4a76ccea807f003154aac718656f993c1fa57764a13c8a4ad" exitCode=0 Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.861576 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbfbm" event={"ID":"98c95b42-bbb4-4348-919d-82e14dccc8b6","Type":"ContainerDied","Data":"ce975f1ea2c4def4a76ccea807f003154aac718656f993c1fa57764a13c8a4ad"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.867454 4809 generic.go:334] "Generic (PLEG): container finished" podID="bf6ffefd-5f03-430c-a852-5a971a3959a2" containerID="32cd197b16e8e3e17556b51755eaee999fa2571c99b868e9f2279bb59d34ac08" exitCode=0 Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.867994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93c6-account-create-update-mkb9z" event={"ID":"bf6ffefd-5f03-430c-a852-5a971a3959a2","Type":"ContainerDied","Data":"32cd197b16e8e3e17556b51755eaee999fa2571c99b868e9f2279bb59d34ac08"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.874704 4809 generic.go:334] "Generic (PLEG): container finished" podID="15a2cf63-3d00-4de9-ae7e-c6d45402e573" containerID="c10e583565c23ff4b94a937d19cb8e51073698fc165b1d6f90699a5db11e26b9" exitCode=0 Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.874943 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" event={"ID":"15a2cf63-3d00-4de9-ae7e-c6d45402e573","Type":"ContainerDied","Data":"c10e583565c23ff4b94a937d19cb8e51073698fc165b1d6f90699a5db11e26b9"} Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.904531 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-br7cm" podStartSLOduration=2.904516201 podStartE2EDuration="2.904516201s" podCreationTimestamp="2026-02-26 14:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:01.903975456 +0000 UTC m=+1520.377295979" watchObservedRunningTime="2026-02-26 14:39:01.904516201 +0000 UTC m=+1520.377836724" Feb 26 14:39:01 crc kubenswrapper[4809]: I0226 14:39:01.904913 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:01 crc kubenswrapper[4809]: timeout: 
failed to connect service ":50051" within 1s Feb 26 14:39:01 crc kubenswrapper[4809]: > Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.526521 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q57fj" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.565831 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.573588 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.576826 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb28s\" (UniqueName: \"kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s\") pod \"6ccf338c-7d94-4016-aa75-1986453f45a4\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.576903 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts\") pod \"6ccf338c-7d94-4016-aa75-1986453f45a4\" (UID: \"6ccf338c-7d94-4016-aa75-1986453f45a4\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.579469 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ccf338c-7d94-4016-aa75-1986453f45a4" (UID: "6ccf338c-7d94-4016-aa75-1986453f45a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.586955 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s" (OuterVolumeSpecName: "kube-api-access-qb28s") pod "6ccf338c-7d94-4016-aa75-1986453f45a4" (UID: "6ccf338c-7d94-4016-aa75-1986453f45a4"). InnerVolumeSpecName "kube-api-access-qb28s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.678405 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts\") pod \"388698b9-4d79-4309-94a1-d867b2dd8cdc\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.678515 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm4mv\" (UniqueName: \"kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv\") pod \"388698b9-4d79-4309-94a1-d867b2dd8cdc\" (UID: \"388698b9-4d79-4309-94a1-d867b2dd8cdc\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.678558 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqzj\" (UniqueName: \"kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj\") pod \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.678686 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts\") pod \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\" (UID: \"6d992047-47b5-4e8f-8b23-9e87ceef8d70\") " Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.679155 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d992047-47b5-4e8f-8b23-9e87ceef8d70" (UID: "6d992047-47b5-4e8f-8b23-9e87ceef8d70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.679263 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d992047-47b5-4e8f-8b23-9e87ceef8d70-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.679283 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb28s\" (UniqueName: \"kubernetes.io/projected/6ccf338c-7d94-4016-aa75-1986453f45a4-kube-api-access-qb28s\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.679297 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ccf338c-7d94-4016-aa75-1986453f45a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.679486 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "388698b9-4d79-4309-94a1-d867b2dd8cdc" (UID: "388698b9-4d79-4309-94a1-d867b2dd8cdc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.682399 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj" (OuterVolumeSpecName: "kube-api-access-shqzj") pod "6d992047-47b5-4e8f-8b23-9e87ceef8d70" (UID: "6d992047-47b5-4e8f-8b23-9e87ceef8d70"). InnerVolumeSpecName "kube-api-access-shqzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.682659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv" (OuterVolumeSpecName: "kube-api-access-cm4mv") pod "388698b9-4d79-4309-94a1-d867b2dd8cdc" (UID: "388698b9-4d79-4309-94a1-d867b2dd8cdc"). InnerVolumeSpecName "kube-api-access-cm4mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.781940 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm4mv\" (UniqueName: \"kubernetes.io/projected/388698b9-4d79-4309-94a1-d867b2dd8cdc-kube-api-access-cm4mv\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.782020 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqzj\" (UniqueName: \"kubernetes.io/projected/6d992047-47b5-4e8f-8b23-9e87ceef8d70-kube-api-access-shqzj\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.782035 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/388698b9-4d79-4309-94a1-d867b2dd8cdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.893422 4809 generic.go:334] "Generic (PLEG): container finished" podID="27297557-090e-4476-ae2c-266a0bb3fdb6" containerID="db56aa97ad19933aaa200d9d9a1d69cccc65324ec4b4bd6b99347587546e80ae" exitCode=0 Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.893473 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4865-account-create-update-qwlm4" event={"ID":"27297557-090e-4476-ae2c-266a0bb3fdb6","Type":"ContainerDied","Data":"db56aa97ad19933aaa200d9d9a1d69cccc65324ec4b4bd6b99347587546e80ae"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.897115 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerStarted","Data":"c2ee04021f0a68fcacb377f4d248cdb374c7dec1efe160f0c5b98a43a2ac8469"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.898476 4809 generic.go:334] "Generic (PLEG): container finished" podID="c868b6d6-47c7-45db-bfb1-f24b55ce40df" containerID="dec548d58c17f7a898778ef58a347e6bea17538d48c008511e91f8204e2efb7b" exitCode=0 Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.898528 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ae2c-account-create-update-rj9zh" event={"ID":"c868b6d6-47c7-45db-bfb1-f24b55ce40df","Type":"ContainerDied","Data":"dec548d58c17f7a898778ef58a347e6bea17538d48c008511e91f8204e2efb7b"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.899980 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" 
event={"ID":"6d992047-47b5-4e8f-8b23-9e87ceef8d70","Type":"ContainerDied","Data":"c1431c2c01217f5135ebf13d025d7ec2d9cd69c64a65c764ce4e39b547fd2571"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.899992 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-vsnnq" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.900006 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1431c2c01217f5135ebf13d025d7ec2d9cd69c64a65c764ce4e39b547fd2571" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.914518 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"9994a7492b440537603e61d2e2bb8094ffd05e25b7a6372f50530ca62a065176"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.914572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"48507eec-5e23-465d-bf31-73a90acd8e73","Type":"ContainerStarted","Data":"b8256f59e7e205cb7be197173afd6972a7d67d8ccabde1d76b8dab7a7395c750"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.919831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q57fj" event={"ID":"6ccf338c-7d94-4016-aa75-1986453f45a4","Type":"ContainerDied","Data":"4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.919884 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2bbd4f0a05671cda25569fd664745a2f4be664938f499133388e8b3fd0122f" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.919996 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q57fj" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.923199 4809 generic.go:334] "Generic (PLEG): container finished" podID="fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" containerID="478493260f335a9117c4aa7e88a8a4e2074736a6d1e2e2da6352e7cdc789eabd" exitCode=0 Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.923262 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-br7cm" event={"ID":"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d","Type":"ContainerDied","Data":"478493260f335a9117c4aa7e88a8a4e2074736a6d1e2e2da6352e7cdc789eabd"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.925833 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-88e3-account-create-update-r9tck" Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.927156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-88e3-account-create-update-r9tck" event={"ID":"388698b9-4d79-4309-94a1-d867b2dd8cdc","Type":"ContainerDied","Data":"947f82e433c78833aebae05f2ae90830a3a8149043e70e80e3a3320f6cf4d309"} Feb 26 14:39:02 crc kubenswrapper[4809]: I0226 14:39:02.927213 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="947f82e433c78833aebae05f2ae90830a3a8149043e70e80e3a3320f6cf4d309" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.102465 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.109893335 podStartE2EDuration="49.102434146s" podCreationTimestamp="2026-02-26 14:38:14 +0000 UTC" firstStartedPulling="2026-02-26 14:38:48.319544214 +0000 UTC m=+1506.792864737" lastFinishedPulling="2026-02-26 14:38:58.312085025 +0000 UTC m=+1516.785405548" observedRunningTime="2026-02-26 14:39:03.057845131 +0000 UTC m=+1521.531165654" watchObservedRunningTime="2026-02-26 14:39:03.102434146 +0000 UTC m=+1521.575754669" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.612345 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.713631 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts\") pod \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.713729 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpq4\" (UniqueName: \"kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4\") pod \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\" (UID: \"15a2cf63-3d00-4de9-ae7e-c6d45402e573\") " Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.714308 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15a2cf63-3d00-4de9-ae7e-c6d45402e573" (UID: "15a2cf63-3d00-4de9-ae7e-c6d45402e573"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.731443 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4" (OuterVolumeSpecName: "kube-api-access-gdpq4") pod "15a2cf63-3d00-4de9-ae7e-c6d45402e573" (UID: "15a2cf63-3d00-4de9-ae7e-c6d45402e573"). InnerVolumeSpecName "kube-api-access-gdpq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.732530 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741384 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cf63-3d00-4de9-ae7e-c6d45402e573" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741423 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cf63-3d00-4de9-ae7e-c6d45402e573" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741439 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de27bcc6-91a3-4610-9611-0f1d5065b8a7" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741445 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="de27bcc6-91a3-4610-9611-0f1d5065b8a7" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741473 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cda5ba3-0335-4853-a084-c30c335e99ff" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741480 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cda5ba3-0335-4853-a084-c30c335e99ff" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741490 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741496 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741515 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388698b9-4d79-4309-94a1-d867b2dd8cdc" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741521 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="388698b9-4d79-4309-94a1-d867b2dd8cdc" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741547 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ac330f-10c7-4cf8-8a22-0ad54c655091" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741553 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ac330f-10c7-4cf8-8a22-0ad54c655091" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741567 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e4ee77-6195-4e59-85b2-ff393dfe933e" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741574 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e4ee77-6195-4e59-85b2-ff393dfe933e" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741589 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccf338c-7d94-4016-aa75-1986453f45a4" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741595 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccf338c-7d94-4016-aa75-1986453f45a4" 
containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741609 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d992047-47b5-4e8f-8b23-9e87ceef8d70" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741615 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d992047-47b5-4e8f-8b23-9e87ceef8d70" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: E0226 14:39:03.741624 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84213e71-f500-4e4a-8a0a-123129d86cf4" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741630 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="84213e71-f500-4e4a-8a0a-123129d86cf4" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.741991 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="84213e71-f500-4e4a-8a0a-123129d86cf4" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742028 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d992047-47b5-4e8f-8b23-9e87ceef8d70" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742043 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ccf338c-7d94-4016-aa75-1986453f45a4" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742054 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ac330f-10c7-4cf8-8a22-0ad54c655091" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742068 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e4ee77-6195-4e59-85b2-ff393dfe933e" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742084 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="388698b9-4d79-4309-94a1-d867b2dd8cdc" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742103 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cda5ba3-0335-4853-a084-c30c335e99ff" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742117 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="de27bcc6-91a3-4610-9611-0f1d5065b8a7" containerName="mariadb-database-create" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742136 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.742146 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a2cf63-3d00-4de9-ae7e-c6d45402e573" containerName="mariadb-account-create-update" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.743495 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.760524 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.780287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.816828 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.816968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.817122 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.817223 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.817265 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbkz\" (UniqueName: \"kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.817710 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.818650 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpq4\" (UniqueName: \"kubernetes.io/projected/15a2cf63-3d00-4de9-ae7e-c6d45402e573-kube-api-access-gdpq4\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.818672 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a2cf63-3d00-4de9-ae7e-c6d45402e573-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.879428 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.884909 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbfbm" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.923763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.923819 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtbkz\" (UniqueName: \"kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.923909 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.924103 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.924153 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.924199 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.925263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.926476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.928323 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.928372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.929337 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.939743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fbfbm" event={"ID":"98c95b42-bbb4-4348-919d-82e14dccc8b6","Type":"ContainerDied","Data":"000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324"} Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.939792 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="000435965df9a4c4df5e67f993c98f21cfd8be4ab3bbde5e3955b221946c0324" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.939860 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fbfbm" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.953227 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-93c6-account-create-update-mkb9z" event={"ID":"bf6ffefd-5f03-430c-a852-5a971a3959a2","Type":"ContainerDied","Data":"23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0"} Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.953273 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23dd1c6d5ce14ca903d6b403405e36d4a1a8db403578b810dd94cd873b587cf0" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.953346 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-93c6-account-create-update-mkb9z" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.965061 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.966580 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtbkz\" (UniqueName: \"kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz\") pod \"dnsmasq-dns-77585f5f8c-9b26n\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.966964 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fc96-account-create-update-fp688" event={"ID":"15a2cf63-3d00-4de9-ae7e-c6d45402e573","Type":"ContainerDied","Data":"c6bf75a837a2939328cf60406f1e044a13aadb53371698632f35788c5719b3ad"} Feb 26 14:39:03 crc kubenswrapper[4809]: I0226 14:39:03.968329 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6bf75a837a2939328cf60406f1e044a13aadb53371698632f35788c5719b3ad" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.026478 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk5ns\" (UniqueName: \"kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns\") pod \"98c95b42-bbb4-4348-919d-82e14dccc8b6\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.026811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnm7w\" (UniqueName: \"kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w\") pod \"bf6ffefd-5f03-430c-a852-5a971a3959a2\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.026944 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts\") pod \"98c95b42-bbb4-4348-919d-82e14dccc8b6\" (UID: \"98c95b42-bbb4-4348-919d-82e14dccc8b6\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.027054 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts\") pod \"bf6ffefd-5f03-430c-a852-5a971a3959a2\" (UID: \"bf6ffefd-5f03-430c-a852-5a971a3959a2\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.028388 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98c95b42-bbb4-4348-919d-82e14dccc8b6" (UID: "98c95b42-bbb4-4348-919d-82e14dccc8b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.028817 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf6ffefd-5f03-430c-a852-5a971a3959a2" (UID: "bf6ffefd-5f03-430c-a852-5a971a3959a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.031319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns" (OuterVolumeSpecName: "kube-api-access-hk5ns") pod "98c95b42-bbb4-4348-919d-82e14dccc8b6" (UID: "98c95b42-bbb4-4348-919d-82e14dccc8b6"). InnerVolumeSpecName "kube-api-access-hk5ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.031499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w" (OuterVolumeSpecName: "kube-api-access-rnm7w") pod "bf6ffefd-5f03-430c-a852-5a971a3959a2" (UID: "bf6ffefd-5f03-430c-a852-5a971a3959a2"). InnerVolumeSpecName "kube-api-access-rnm7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.131397 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c95b42-bbb4-4348-919d-82e14dccc8b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.132533 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf6ffefd-5f03-430c-a852-5a971a3959a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.133256 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk5ns\" (UniqueName: \"kubernetes.io/projected/98c95b42-bbb4-4348-919d-82e14dccc8b6-kube-api-access-hk5ns\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.133309 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnm7w\" (UniqueName: \"kubernetes.io/projected/bf6ffefd-5f03-430c-a852-5a971a3959a2-kube-api-access-rnm7w\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.175334 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.179730 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pdkz9"] Feb 26 14:39:04 crc kubenswrapper[4809]: E0226 14:39:04.180206 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6ffefd-5f03-430c-a852-5a971a3959a2" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.180225 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6ffefd-5f03-430c-a852-5a971a3959a2" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: E0226 14:39:04.180274 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c95b42-bbb4-4348-919d-82e14dccc8b6" containerName="mariadb-database-create" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.180282 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c95b42-bbb4-4348-919d-82e14dccc8b6" containerName="mariadb-database-create" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.180469 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c95b42-bbb4-4348-919d-82e14dccc8b6" containerName="mariadb-database-create" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.180493 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6ffefd-5f03-430c-a852-5a971a3959a2" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.181301 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.184071 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.185537 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wtrbf" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.186107 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.186238 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.209238 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdkz9"] Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.339556 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqff\" (UniqueName: \"kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.339731 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.339934 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.442495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.443096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.443156 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqff\" (UniqueName: \"kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.456811 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.478852 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqff\" (UniqueName: \"kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.479422 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle\") pod \"keystone-db-sync-pdkz9\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.503402 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.523819 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.571148 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.596808 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-br7cm" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.649828 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts\") pod \"27297557-090e-4476-ae2c-266a0bb3fdb6\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.650083 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmxz5\" (UniqueName: \"kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5\") pod \"27297557-090e-4476-ae2c-266a0bb3fdb6\" (UID: \"27297557-090e-4476-ae2c-266a0bb3fdb6\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.650462 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27297557-090e-4476-ae2c-266a0bb3fdb6" (UID: "27297557-090e-4476-ae2c-266a0bb3fdb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.658234 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5" (OuterVolumeSpecName: "kube-api-access-nmxz5") pod "27297557-090e-4476-ae2c-266a0bb3fdb6" (UID: "27297557-090e-4476-ae2c-266a0bb3fdb6"). InnerVolumeSpecName "kube-api-access-nmxz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.752859 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts\") pod \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.753255 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt7kw\" (UniqueName: \"kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw\") pod \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.753385 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cblw5\" (UniqueName: \"kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5\") pod \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\" (UID: \"c868b6d6-47c7-45db-bfb1-f24b55ce40df\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.753431 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts\") pod \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\" (UID: \"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d\") " Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.753611 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c868b6d6-47c7-45db-bfb1-f24b55ce40df" (UID: "c868b6d6-47c7-45db-bfb1-f24b55ce40df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.756827 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5" (OuterVolumeSpecName: "kube-api-access-cblw5") pod "c868b6d6-47c7-45db-bfb1-f24b55ce40df" (UID: "c868b6d6-47c7-45db-bfb1-f24b55ce40df"). InnerVolumeSpecName "kube-api-access-cblw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.757494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" (UID: "fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.757641 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw" (OuterVolumeSpecName: "kube-api-access-gt7kw") pod "fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" (UID: "fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d"). InnerVolumeSpecName "kube-api-access-gt7kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758246 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c868b6d6-47c7-45db-bfb1-f24b55ce40df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758268 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt7kw\" (UniqueName: \"kubernetes.io/projected/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-kube-api-access-gt7kw\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758283 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27297557-090e-4476-ae2c-266a0bb3fdb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758296 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cblw5\" (UniqueName: \"kubernetes.io/projected/c868b6d6-47c7-45db-bfb1-f24b55ce40df-kube-api-access-cblw5\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758308 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.758319 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmxz5\" (UniqueName: \"kubernetes.io/projected/27297557-090e-4476-ae2c-266a0bb3fdb6-kube-api-access-nmxz5\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.907451 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf"] Feb 26 14:39:04 crc kubenswrapper[4809]: E0226 14:39:04.908006 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908039 
4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: E0226 14:39:04.908067 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c868b6d6-47c7-45db-bfb1-f24b55ce40df" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908075 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c868b6d6-47c7-45db-bfb1-f24b55ce40df" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: E0226 14:39:04.908098 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27297557-090e-4476-ae2c-266a0bb3fdb6" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908106 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="27297557-090e-4476-ae2c-266a0bb3fdb6" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908357 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="27297557-090e-4476-ae2c-266a0bb3fdb6" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908495 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.908515 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c868b6d6-47c7-45db-bfb1-f24b55ce40df" containerName="mariadb-account-create-update" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.909518 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.945154 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf"] Feb 26 14:39:04 crc kubenswrapper[4809]: I0226 14:39:04.955770 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.003904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" event={"ID":"91595d30-de54-4cf9-947a-1e9e1b8c411b","Type":"ContainerStarted","Data":"9fa08f83e0682a5f52043cb3ad3f260a3a53c4a50db20fb0f1e69b07443989f7"} Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.006557 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ae2c-account-create-update-rj9zh" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.007897 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ae2c-account-create-update-rj9zh" event={"ID":"c868b6d6-47c7-45db-bfb1-f24b55ce40df","Type":"ContainerDied","Data":"a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab"} Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.007945 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a371b4a29e9ee888cdd9747863d309758549b6698e12e8b3ef5900eee03746ab" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.012241 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-br7cm" event={"ID":"fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d","Type":"ContainerDied","Data":"34fb8a5e478a237ca1ff41889566162a09b57854f6f4bc58b16b2130e656660a"} Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.012279 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34fb8a5e478a237ca1ff41889566162a09b57854f6f4bc58b16b2130e656660a" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.012361 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-br7cm" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.020606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4865-account-create-update-qwlm4" event={"ID":"27297557-090e-4476-ae2c-266a0bb3fdb6","Type":"ContainerDied","Data":"2980dfdb09196b29e6c19ce8e86c93508756d5f3bcea5df754b9e7a84c45c82f"} Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.020673 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2980dfdb09196b29e6c19ce8e86c93508756d5f3bcea5df754b9e7a84c45c82f" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.020760 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4865-account-create-update-qwlm4" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.066514 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.066592 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56rg\" (UniqueName: \"kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.119706 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdkz9"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.154043 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-fbdc-account-create-update-8z2q5"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.156274 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.159539 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.163753 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fbdc-account-create-update-8z2q5"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.168925 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.169045 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56rg\" (UniqueName: \"kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.171213 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.196379 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56rg\" (UniqueName: \"kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg\") pod \"mysqld-exporter-openstack-cell1-db-create-5zxhf\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.263482 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.280539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbsh\" (UniqueName: \"kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.280741 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.382908 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.383185 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlbsh\" (UniqueName: \"kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.383823 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.400907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbsh\" (UniqueName: \"kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh\") pod \"mysqld-exporter-fbdc-account-create-update-8z2q5\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.483770 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.559495 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-br7cm"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.570941 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-br7cm"] Feb 26 14:39:05 crc kubenswrapper[4809]: I0226 14:39:05.765887 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf"] Feb 26 14:39:05 crc kubenswrapper[4809]: W0226 14:39:05.767858 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ca006a_76e9_4923_b437_9574f83a33ec.slice/crio-6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795 WatchSource:0}: Error finding container 6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795: Status 404 returned error can't find the container with id 6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795 Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.019224 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fbdc-account-create-update-8z2q5"] Feb 26 14:39:06 crc kubenswrapper[4809]: W0226 14:39:06.024268 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940c7f45_d4db_4915_9e05_b3d6be8cbc8a.slice/crio-5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee WatchSource:0}: Error finding container 5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee: Status 404 returned error can't find the container with id 5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.041076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdkz9" event={"ID":"fe49627e-5430-4a47-b96d-cd756aecfc5c","Type":"ContainerStarted","Data":"a6d38d4614db53d0ee4f2d5975c98c439140379e63902b3c2e6023100de86c0d"} Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.043281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" event={"ID":"08ca006a-76e9-4923-b437-9574f83a33ec","Type":"ContainerStarted","Data":"317e6bc04178519de1bfc2b4221d35eed35e5c95b47a8a05a6eb42d3c2b9e248"} Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.043340 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" event={"ID":"08ca006a-76e9-4923-b437-9574f83a33ec","Type":"ContainerStarted","Data":"6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795"} Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.054642 4809 generic.go:334] "Generic (PLEG): container finished" podID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerID="63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d" exitCode=0 Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.054696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" event={"ID":"91595d30-de54-4cf9-947a-1e9e1b8c411b","Type":"ContainerDied","Data":"63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d"} Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.063814 4809 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" podStartSLOduration=2.063793544 podStartE2EDuration="2.063793544s" podCreationTimestamp="2026-02-26 14:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:06.059851652 +0000 UTC m=+1524.533172175" watchObservedRunningTime="2026-02-26 14:39:06.063793544 +0000 UTC m=+1524.537114067" Feb 26 14:39:06 crc kubenswrapper[4809]: I0226 14:39:06.285471 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d" path="/var/lib/kubelet/pods/fb8f58f2-38aa-49b9-9ec0-b44d89ea7d1d/volumes" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.068682 4809 generic.go:334] "Generic (PLEG): container finished" podID="08ca006a-76e9-4923-b437-9574f83a33ec" containerID="317e6bc04178519de1bfc2b4221d35eed35e5c95b47a8a05a6eb42d3c2b9e248" exitCode=0 Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.068770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" event={"ID":"08ca006a-76e9-4923-b437-9574f83a33ec","Type":"ContainerDied","Data":"317e6bc04178519de1bfc2b4221d35eed35e5c95b47a8a05a6eb42d3c2b9e248"} Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.073350 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" event={"ID":"91595d30-de54-4cf9-947a-1e9e1b8c411b","Type":"ContainerStarted","Data":"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909"} Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.073473 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.076251 4809 generic.go:334] "Generic (PLEG): container finished" podID="940c7f45-d4db-4915-9e05-b3d6be8cbc8a" containerID="e6c0ec8f0111dc82c2daf78471e69a6620bcafa44077622c72efe0d81176524f" exitCode=0 Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.076311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" event={"ID":"940c7f45-d4db-4915-9e05-b3d6be8cbc8a","Type":"ContainerDied","Data":"e6c0ec8f0111dc82c2daf78471e69a6620bcafa44077622c72efe0d81176524f"} Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.076345 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" event={"ID":"940c7f45-d4db-4915-9e05-b3d6be8cbc8a","Type":"ContainerStarted","Data":"5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee"} Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.112201 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" podStartSLOduration=4.112182616 podStartE2EDuration="4.112182616s" podCreationTimestamp="2026-02-26 14:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:07.104307782 +0000 UTC m=+1525.577628325" watchObservedRunningTime="2026-02-26 14:39:07.112182616 +0000 UTC m=+1525.585503139" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.637958 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-x42ls"] Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.639847 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.641762 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4x9gd" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.656509 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x42ls"] Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.657874 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.740528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.740596 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.740652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8q27\" (UniqueName: \"kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.740693 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.842965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.843409 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.843574 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8q27\" (UniqueName: \"kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.843691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data\") pod 
\"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.858059 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.862414 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.864376 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8q27\" (UniqueName: \"kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.866824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data\") pod \"glance-db-sync-x42ls\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:07 crc kubenswrapper[4809]: I0226 14:39:07.964383 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x42ls" Feb 26 14:39:08 crc kubenswrapper[4809]: I0226 14:39:08.091059 4809 generic.go:334] "Generic (PLEG): container finished" podID="5b487ff7-ff62-4570-a75c-314514fb7496" containerID="c2ee04021f0a68fcacb377f4d248cdb374c7dec1efe160f0c5b98a43a2ac8469" exitCode=0 Feb 26 14:39:08 crc kubenswrapper[4809]: I0226 14:39:08.091164 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerDied","Data":"c2ee04021f0a68fcacb377f4d248cdb374c7dec1efe160f0c5b98a43a2ac8469"} Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.100655 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z9dt6"] Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.102610 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.105160 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.131361 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z9dt6"] Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.213782 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8vq\" (UniqueName: \"kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.213837 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.315772 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8vq\" (UniqueName: \"kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.316110 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.317121 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.337798 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8vq\" (UniqueName: \"kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq\") pod \"root-account-create-update-z9dt6\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:09 crc kubenswrapper[4809]: I0226 14:39:09.430333 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.130518 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" event={"ID":"940c7f45-d4db-4915-9e05-b3d6be8cbc8a","Type":"ContainerDied","Data":"5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee"} Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.131126 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5989c106ec2aafc09495b815baa2073064cc2fbd118eb42ccb19eb5160a6f7ee" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.133942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" event={"ID":"08ca006a-76e9-4923-b437-9574f83a33ec","Type":"ContainerDied","Data":"6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795"} Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.133968 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbded1dbd99762838047b116cfd164541e1ee0c2ad73db9b5a117419b1ec795" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.363488 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.404282 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.479195 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts\") pod \"08ca006a-76e9-4923-b437-9574f83a33ec\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.479474 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts\") pod \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.479654 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlbsh\" (UniqueName: \"kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh\") pod \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\" (UID: \"940c7f45-d4db-4915-9e05-b3d6be8cbc8a\") " Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.479917 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56rg\" (UniqueName: \"kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg\") pod \"08ca006a-76e9-4923-b437-9574f83a33ec\" (UID: \"08ca006a-76e9-4923-b437-9574f83a33ec\") " Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.481028 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08ca006a-76e9-4923-b437-9574f83a33ec" (UID: "08ca006a-76e9-4923-b437-9574f83a33ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.481058 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "940c7f45-d4db-4915-9e05-b3d6be8cbc8a" (UID: "940c7f45-d4db-4915-9e05-b3d6be8cbc8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.486008 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg" (OuterVolumeSpecName: "kube-api-access-r56rg") pod "08ca006a-76e9-4923-b437-9574f83a33ec" (UID: "08ca006a-76e9-4923-b437-9574f83a33ec"). InnerVolumeSpecName "kube-api-access-r56rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.486069 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh" (OuterVolumeSpecName: "kube-api-access-hlbsh") pod "940c7f45-d4db-4915-9e05-b3d6be8cbc8a" (UID: "940c7f45-d4db-4915-9e05-b3d6be8cbc8a"). InnerVolumeSpecName "kube-api-access-hlbsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.532972 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z9dt6"] Feb 26 14:39:11 crc kubenswrapper[4809]: W0226 14:39:11.535476 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07008937_f8ca_403c_b8a1_42a7ae37a501.slice/crio-73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1 WatchSource:0}: Error finding container 73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1: Status 404 returned error can't find the container with id 73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1 Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.582562 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlbsh\" (UniqueName: \"kubernetes.io/projected/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-kube-api-access-hlbsh\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.582593 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56rg\" (UniqueName: \"kubernetes.io/projected/08ca006a-76e9-4923-b437-9574f83a33ec-kube-api-access-r56rg\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.582604 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08ca006a-76e9-4923-b437-9574f83a33ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.582615 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940c7f45-d4db-4915-9e05-b3d6be8cbc8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.721065 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x42ls"] Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.793964 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:39:11 crc kubenswrapper[4809]: I0226 14:39:11.794039 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.145791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdkz9" event={"ID":"fe49627e-5430-4a47-b96d-cd756aecfc5c","Type":"ContainerStarted","Data":"19ab8b037f2b87d4beecc408860cd0fd4ae9e264b40cac919dd97b9734f516de"} Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.147969 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:12 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:39:12 crc kubenswrapper[4809]: > Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.150435 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerStarted","Data":"24d23d6be712b16dce6836c49f7447bf49e902e97b48278e88bec4a479c4d61b"} Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.154303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x42ls" event={"ID":"8cac2949-71b1-417b-b184-e890f4a309ad","Type":"ContainerStarted","Data":"07aff75bf4616ea1b4c64f9a15894b3b672d872726fef920256108b667604ade"} Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.156917 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fbdc-account-create-update-8z2q5" Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.156915 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z9dt6" event={"ID":"07008937-f8ca-403c-b8a1-42a7ae37a501","Type":"ContainerStarted","Data":"73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1"} Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.161164 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf" Feb 26 14:39:12 crc kubenswrapper[4809]: I0226 14:39:12.176751 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pdkz9" podStartSLOduration=2.172732764 podStartE2EDuration="8.176727577s" podCreationTimestamp="2026-02-26 14:39:04 +0000 UTC" firstStartedPulling="2026-02-26 14:39:05.123265403 +0000 UTC m=+1523.596585926" lastFinishedPulling="2026-02-26 14:39:11.127260206 +0000 UTC m=+1529.600580739" observedRunningTime="2026-02-26 14:39:12.175370339 +0000 UTC m=+1530.648690862" watchObservedRunningTime="2026-02-26 14:39:12.176727577 +0000 UTC m=+1530.650048100" Feb 26 14:39:13 crc kubenswrapper[4809]: I0226 14:39:13.192314 4809 generic.go:334] "Generic (PLEG): container finished" podID="07008937-f8ca-403c-b8a1-42a7ae37a501" containerID="618904a039a8a8b2f4745c4e212aa5556c71173015640daf854ca9e64b9a9ea6" exitCode=0 Feb 26 14:39:13 crc kubenswrapper[4809]: I0226 14:39:13.192429 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z9dt6" event={"ID":"07008937-f8ca-403c-b8a1-42a7ae37a501","Type":"ContainerDied","Data":"618904a039a8a8b2f4745c4e212aa5556c71173015640daf854ca9e64b9a9ea6"} Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.178241 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.311735 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.311958 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-gf7ld" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="dnsmasq-dns" containerID="cri-o://88f3188a7dd3f9a35c25a3bd3593654bfe6e6136b61982c2204374c3a0e6def9" gracePeriod=10 Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.849218 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.935346 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gf7ld" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.970666 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts\") pod \"07008937-f8ca-403c-b8a1-42a7ae37a501\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.971099 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8vq\" (UniqueName: \"kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq\") pod \"07008937-f8ca-403c-b8a1-42a7ae37a501\" (UID: \"07008937-f8ca-403c-b8a1-42a7ae37a501\") " Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.971625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07008937-f8ca-403c-b8a1-42a7ae37a501" (UID: "07008937-f8ca-403c-b8a1-42a7ae37a501"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:14 crc kubenswrapper[4809]: I0226 14:39:14.977645 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq" (OuterVolumeSpecName: "kube-api-access-6q8vq") pod "07008937-f8ca-403c-b8a1-42a7ae37a501" (UID: "07008937-f8ca-403c-b8a1-42a7ae37a501"). InnerVolumeSpecName "kube-api-access-6q8vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.074301 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07008937-f8ca-403c-b8a1-42a7ae37a501-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.074348 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q8vq\" (UniqueName: \"kubernetes.io/projected/07008937-f8ca-403c-b8a1-42a7ae37a501-kube-api-access-6q8vq\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.229163 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z9dt6" event={"ID":"07008937-f8ca-403c-b8a1-42a7ae37a501","Type":"ContainerDied","Data":"73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1"} Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.229252 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73582dd7bf5d51493b272098fbb48738f90de3ff1b162cb5a7834f6115b081b1" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.229177 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z9dt6" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.232211 4809 generic.go:334] "Generic (PLEG): container finished" podID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerID="88f3188a7dd3f9a35c25a3bd3593654bfe6e6136b61982c2204374c3a0e6def9" exitCode=0 Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.232245 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gf7ld" event={"ID":"2c6c5570-9dfc-4057-bea3-02c1dd09e31f","Type":"ContainerDied","Data":"88f3188a7dd3f9a35c25a3bd3593654bfe6e6136b61982c2204374c3a0e6def9"} Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.326972 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:39:15 crc kubenswrapper[4809]: E0226 14:39:15.327416 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07008937-f8ca-403c-b8a1-42a7ae37a501" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327639 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="07008937-f8ca-403c-b8a1-42a7ae37a501" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: E0226 14:39:15.327648 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940c7f45-d4db-4915-9e05-b3d6be8cbc8a" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327654 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="940c7f45-d4db-4915-9e05-b3d6be8cbc8a" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: E0226 14:39:15.327678 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ca006a-76e9-4923-b437-9574f83a33ec" containerName="mariadb-database-create" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327685 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ca006a-76e9-4923-b437-9574f83a33ec" containerName="mariadb-database-create" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327866 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ca006a-76e9-4923-b437-9574f83a33ec" containerName="mariadb-database-create" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327879 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="07008937-f8ca-403c-b8a1-42a7ae37a501" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.327899 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="940c7f45-d4db-4915-9e05-b3d6be8cbc8a" containerName="mariadb-account-create-update" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.328602 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.333152 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.344099 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.389886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.390064 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.390210 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjkd\" (UniqueName: \"kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.492320 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.492408 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.492463 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjkd\" (UniqueName: \"kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.498780 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.502602 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.551836 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjkd\" (UniqueName: 
\"kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd\") pod \"mysqld-exporter-0\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.658288 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.841707 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.901772 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqvsk\" (UniqueName: \"kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk\") pod \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.901866 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc\") pod \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.902109 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb\") pod \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.902174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config\") pod \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.902315 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb\") pod \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\" (UID: \"2c6c5570-9dfc-4057-bea3-02c1dd09e31f\") " Feb 26 14:39:15 crc kubenswrapper[4809]: I0226 14:39:15.987333 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk" (OuterVolumeSpecName: "kube-api-access-pqvsk") pod "2c6c5570-9dfc-4057-bea3-02c1dd09e31f" (UID: "2c6c5570-9dfc-4057-bea3-02c1dd09e31f"). InnerVolumeSpecName "kube-api-access-pqvsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.008602 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqvsk\" (UniqueName: \"kubernetes.io/projected/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-kube-api-access-pqvsk\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.083466 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c6c5570-9dfc-4057-bea3-02c1dd09e31f" (UID: "2c6c5570-9dfc-4057-bea3-02c1dd09e31f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.110632 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.157274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c6c5570-9dfc-4057-bea3-02c1dd09e31f" (UID: "2c6c5570-9dfc-4057-bea3-02c1dd09e31f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.187381 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c6c5570-9dfc-4057-bea3-02c1dd09e31f" (UID: "2c6c5570-9dfc-4057-bea3-02c1dd09e31f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.200514 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config" (OuterVolumeSpecName: "config") pod "2c6c5570-9dfc-4057-bea3-02c1dd09e31f" (UID: "2c6c5570-9dfc-4057-bea3-02c1dd09e31f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.222919 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.222964 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.222977 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6c5570-9dfc-4057-bea3-02c1dd09e31f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.271815 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gf7ld" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.297517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerStarted","Data":"764a57c330b71075f9fe355832381232f64a791d592722a90a87f67add4e095f"} Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.297565 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5b487ff7-ff62-4570-a75c-314514fb7496","Type":"ContainerStarted","Data":"0a61cfe198117fb597a8339687e9087fbb7275f49797a7bf8dfc460b2022e84c"} Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.297584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gf7ld" event={"ID":"2c6c5570-9dfc-4057-bea3-02c1dd09e31f","Type":"ContainerDied","Data":"1f7f20f71d18687ce6f4ceb966547901f04d868837cd7e2d9cb1f40335e9aad5"} Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.302358 4809 scope.go:117] "RemoveContainer" containerID="88f3188a7dd3f9a35c25a3bd3593654bfe6e6136b61982c2204374c3a0e6def9" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.322314 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.322163777 podStartE2EDuration="20.322163777s" podCreationTimestamp="2026-02-26 14:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:16.303204739 +0000 UTC m=+1534.776525272" watchObservedRunningTime="2026-02-26 14:39:16.322163777 +0000 UTC m=+1534.795484300" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.361766 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.364472 4809 scope.go:117] "RemoveContainer" containerID="d4c95a4c52b58cc55569078a68c68813c537408cdcf779331fc9cd88e36a5392" Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.374224 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:39:16 crc kubenswrapper[4809]: I0226 14:39:16.383992 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gf7ld"] Feb 26 14:39:17 crc kubenswrapper[4809]: I0226 14:39:17.286280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4b3f6c49-8612-45fc-af31-6ff2c2201c2e","Type":"ContainerStarted","Data":"413507516be118ba96a0fb97322b0d02174db3fb26ef85085c473107bbef3376"} Feb 26 14:39:17 crc kubenswrapper[4809]: I0226 14:39:17.379832 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 14:39:18 crc kubenswrapper[4809]: I0226 14:39:18.278570 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" path="/var/lib/kubelet/pods/2c6c5570-9dfc-4057-bea3-02c1dd09e31f/volumes" Feb 26 14:39:20 crc kubenswrapper[4809]: I0226 14:39:20.610229 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z9dt6"] Feb 26 14:39:20 crc kubenswrapper[4809]: I0226 14:39:20.624330 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z9dt6"] Feb 26 14:39:21 crc kubenswrapper[4809]: I0226 14:39:21.800858 4809 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:39:21 crc kubenswrapper[4809]: > Feb 26 14:39:22 crc kubenswrapper[4809]: I0226 14:39:22.272382 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07008937-f8ca-403c-b8a1-42a7ae37a501" path="/var/lib/kubelet/pods/07008937-f8ca-403c-b8a1-42a7ae37a501/volumes" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.623621 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ztvfb"] Feb 26 14:39:25 crc kubenswrapper[4809]: E0226 14:39:25.624841 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="dnsmasq-dns" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.624862 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="dnsmasq-dns" Feb 26 14:39:25 crc kubenswrapper[4809]: E0226 14:39:25.624909 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="init" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.624919 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="init" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.625232 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6c5570-9dfc-4057-bea3-02c1dd09e31f" containerName="dnsmasq-dns" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.626189 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.646711 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ztvfb"] Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.647955 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.750256 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56kq\" (UniqueName: \"kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.750350 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.853682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56kq\" (UniqueName: \"kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.853736 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.854769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.876765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56kq\" (UniqueName: \"kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq\") pod \"root-account-create-update-ztvfb\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:25 crc kubenswrapper[4809]: I0226 14:39:25.978252 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:26 crc kubenswrapper[4809]: I0226 14:39:26.394390 4809 generic.go:334] "Generic (PLEG): container finished" podID="fe49627e-5430-4a47-b96d-cd756aecfc5c" containerID="19ab8b037f2b87d4beecc408860cd0fd4ae9e264b40cac919dd97b9734f516de" exitCode=0 Feb 26 14:39:26 crc kubenswrapper[4809]: I0226 14:39:26.394606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdkz9" event={"ID":"fe49627e-5430-4a47-b96d-cd756aecfc5c","Type":"ContainerDied","Data":"19ab8b037f2b87d4beecc408860cd0fd4ae9e264b40cac919dd97b9734f516de"} Feb 26 14:39:26 crc kubenswrapper[4809]: I0226 14:39:26.506585 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod67dba26b-656d-4a47-b407-bbaf243903a5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod67dba26b-656d-4a47-b407-bbaf243903a5] : Timed out while waiting for systemd to remove kubepods-besteffort-pod67dba26b_656d_4a47_b407_bbaf243903a5.slice" Feb 26 14:39:26 crc kubenswrapper[4809]: I0226 14:39:26.537766 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podfa23d41b-7d65-437d-aabf-afec242b5401"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podfa23d41b-7d65-437d-aabf-afec242b5401] : Timed out while waiting for systemd to remove kubepods-burstable-podfa23d41b_7d65_437d_aabf_afec242b5401.slice" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.182745 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ztvfb"] Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.379504 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.390638 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.409270 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x42ls" event={"ID":"8cac2949-71b1-417b-b184-e890f4a309ad","Type":"ContainerStarted","Data":"0801b838d74337a799a33972e83e803fd41c03dac85dc4c693a1cb6db903f81d"} Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.412679 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4b3f6c49-8612-45fc-af31-6ff2c2201c2e","Type":"ContainerStarted","Data":"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b"} Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.425871 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ztvfb" event={"ID":"d32a25c3-1275-463a-bfca-f7cac13c5048","Type":"ContainerStarted","Data":"23f18511af554514f384b686cc2c5edac94af249eaf6a6c51bf3cfd551224bdc"} Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.426101 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ztvfb" event={"ID":"d32a25c3-1275-463a-bfca-f7cac13c5048","Type":"ContainerStarted","Data":"74a71ef4c8ad23a0f6ebc7caace493dbc5c460091115cbc4495f0260f5c570b2"} Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.433312 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.446738 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-x42ls" podStartSLOduration=5.437354465 podStartE2EDuration="20.446716262s" podCreationTimestamp="2026-02-26 14:39:07 +0000 UTC" firstStartedPulling="2026-02-26 14:39:11.721356126 +0000 UTC m=+1530.194676649" lastFinishedPulling="2026-02-26 14:39:26.730717913 +0000 UTC m=+1545.204038446" observedRunningTime="2026-02-26 14:39:27.438126348 +0000 UTC m=+1545.911446861" watchObservedRunningTime="2026-02-26 14:39:27.446716262 +0000 UTC m=+1545.920036785" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.472406 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.058270427 podStartE2EDuration="12.4723912s" podCreationTimestamp="2026-02-26 14:39:15 +0000 UTC" firstStartedPulling="2026-02-26 14:39:16.331553644 +0000 UTC m=+1534.804874167" lastFinishedPulling="2026-02-26 14:39:26.745674417 +0000 UTC m=+1545.218994940" observedRunningTime="2026-02-26 14:39:27.457311002 +0000 UTC m=+1545.930631525" watchObservedRunningTime="2026-02-26 14:39:27.4723912 +0000 UTC m=+1545.945711713" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.494532 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ztvfb" podStartSLOduration=2.494514908 podStartE2EDuration="2.494514908s" podCreationTimestamp="2026-02-26 14:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:27.493039146 +0000 UTC m=+1545.966359689" watchObservedRunningTime="2026-02-26 14:39:27.494514908 +0000 UTC m=+1545.967835431" Feb 26 14:39:27 crc kubenswrapper[4809]: I0226 14:39:27.936122 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.014071 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data\") pod \"fe49627e-5430-4a47-b96d-cd756aecfc5c\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.014270 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqff\" (UniqueName: \"kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff\") pod \"fe49627e-5430-4a47-b96d-cd756aecfc5c\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.014644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle\") pod \"fe49627e-5430-4a47-b96d-cd756aecfc5c\" (UID: \"fe49627e-5430-4a47-b96d-cd756aecfc5c\") " Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.044573 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff" (OuterVolumeSpecName: "kube-api-access-7rqff") pod "fe49627e-5430-4a47-b96d-cd756aecfc5c" (UID: "fe49627e-5430-4a47-b96d-cd756aecfc5c"). InnerVolumeSpecName "kube-api-access-7rqff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.049972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe49627e-5430-4a47-b96d-cd756aecfc5c" (UID: "fe49627e-5430-4a47-b96d-cd756aecfc5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.097296 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data" (OuterVolumeSpecName: "config-data") pod "fe49627e-5430-4a47-b96d-cd756aecfc5c" (UID: "fe49627e-5430-4a47-b96d-cd756aecfc5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.117451 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqff\" (UniqueName: \"kubernetes.io/projected/fe49627e-5430-4a47-b96d-cd756aecfc5c-kube-api-access-7rqff\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.117494 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.117511 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe49627e-5430-4a47-b96d-cd756aecfc5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.438366 4809 generic.go:334] "Generic (PLEG): container finished" podID="d32a25c3-1275-463a-bfca-f7cac13c5048" containerID="23f18511af554514f384b686cc2c5edac94af249eaf6a6c51bf3cfd551224bdc" exitCode=0 Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.438458 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ztvfb" event={"ID":"d32a25c3-1275-463a-bfca-f7cac13c5048","Type":"ContainerDied","Data":"23f18511af554514f384b686cc2c5edac94af249eaf6a6c51bf3cfd551224bdc"} Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.444901 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pdkz9" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.445521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdkz9" event={"ID":"fe49627e-5430-4a47-b96d-cd756aecfc5c","Type":"ContainerDied","Data":"a6d38d4614db53d0ee4f2d5975c98c439140379e63902b3c2e6023100de86c0d"} Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.445547 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d38d4614db53d0ee4f2d5975c98c439140379e63902b3c2e6023100de86c0d" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.684910 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:28 crc kubenswrapper[4809]: E0226 14:39:28.685517 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe49627e-5430-4a47-b96d-cd756aecfc5c" containerName="keystone-db-sync" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.685539 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe49627e-5430-4a47-b96d-cd756aecfc5c" containerName="keystone-db-sync" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.685793 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe49627e-5430-4a47-b96d-cd756aecfc5c" containerName="keystone-db-sync" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.687130 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.722991 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734566 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734687 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734820 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.734887 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8s7\" (UniqueName: \"kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.777071 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dxfnd"] Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.778728 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.809967 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.810220 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.810841 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.820683 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dxfnd"] Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.823578 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wtrbf" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.830262 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.837980 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.838086 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.838115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.838130 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 
14:39:28.838165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56df\" (UniqueName: \"kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.838217 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.838232 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.840057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.840114 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.840222 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp8s7\" (UniqueName: \"kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.840278 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.840405 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.841535 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.842343 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.843559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.852723 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.852758 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.902905 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8s7\" (UniqueName: \"kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7\") pod \"dnsmasq-dns-55fff446b9-ktq5c\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.937121 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-pph48"] Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.942278 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.942328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.942347 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.942383 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56df\" (UniqueName: \"kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 
14:39:28.942440 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.942465 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.944616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-pph48" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.953684 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.954795 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.955657 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.956205 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.960605 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.978093 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-pph48"] Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.978309 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 14:39:28 crc kubenswrapper[4809]: I0226 14:39:28.978364 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hnpcv" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.010587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56df\" (UniqueName: \"kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df\") pod \"keystone-bootstrap-dxfnd\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " 
pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.011435 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.049528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.049664 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjmb\" (UniqueName: \"kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.049716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.129509 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.153275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.153358 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjmb\" (UniqueName: \"kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.153403 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.177257 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.197972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.229725 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4vjmb\" (UniqueName: \"kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb\") pod \"heat-db-sync-pph48\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.269362 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-pph48" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.297076 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b89wr"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.298807 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.320653 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qqnbq" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.322150 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.324978 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.363598 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92sm\" (UniqueName: \"kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.363982 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.364041 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.364222 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.364292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.364341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " 
pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.397907 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b89wr"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466467 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466614 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466650 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.466841 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92sm\" (UniqueName: \"kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.471929 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.475372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.482406 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc 
kubenswrapper[4809]: I0226 14:39:29.487038 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.495742 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b7cnn"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.497507 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.512732 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.513169 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.513855 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.514561 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rmtfm" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.518433 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92sm\" (UniqueName: \"kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm\") pod \"cinder-db-sync-b89wr\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.568904 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.569391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.569599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.592673 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.636255 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b7cnn"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.648988 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b89wr" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.678218 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.678443 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.678701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.685094 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.689889 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.723601 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd\") pod \"neutron-db-sync-b7cnn\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.770119 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g49c6"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.774807 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.784862 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xkmwc" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.785561 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.809149 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-sdgpk"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.810741 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.815443 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.815765 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kll8r" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.815991 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.836176 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sdgpk"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.851272 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.858597 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.872544 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.890864 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.891986 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.892489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5549\" (UniqueName: \"kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.893117 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.893306 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.893455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data\") pod \"barbican-db-sync-g49c6\" 
(UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.893527 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvd5\" (UniqueName: \"kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.898648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.918408 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g49c6"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.937575 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.961638 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.966389 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.971870 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.972394 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:39:29 crc kubenswrapper[4809]: I0226 14:39:29.972714 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvd5\" (UniqueName: \"kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001667 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001752 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 
14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001827 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001878 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001937 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.001970 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002102 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mfzj\" (UniqueName: \"kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002214 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5549\" (UniqueName: \"kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002256 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002283 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002325 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfz9q\" (UniqueName: \"kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002371 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002428 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.002538 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.004993 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.013053 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.014590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.015999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.040537 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.043178 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.050849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvd5\" (UniqueName: \"kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5\") pod \"barbican-db-sync-g49c6\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.058564 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5549\" (UniqueName: \"kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549\") pod \"placement-db-sync-sdgpk\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") " pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.083398 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104443 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mfzj\" (UniqueName: \"kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104482 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfz9q\" (UniqueName: \"kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.104953 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.111064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.114214 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127549 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127612 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127775 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g49c6" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127877 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.127957 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.128039 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.132526 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.132850 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.134659 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.135727 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.136726 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.136954 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.139557 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.144373 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.145853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mfzj\" (UniqueName: \"kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.146459 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sdgpk" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.162962 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfz9q\" (UniqueName: \"kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q\") pod \"dnsmasq-dns-76fcf4b695-csfht\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.163472 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.200698 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.219989 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.396497 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-pph48"] Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.445873 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dxfnd"] Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.544957 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" event={"ID":"cc54acef-dc5d-4a73-9283-1883c7e3314a","Type":"ContainerStarted","Data":"af4f9e4c6771c9c8c4e1d7fc24f77e8d4f9de1edb17673dcdd860703c48c2793"} Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.558399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-pph48" event={"ID":"84499f28-1908-4654-b0bc-a6961f49bb57","Type":"ContainerStarted","Data":"2b9135cd164bd4b97125d1157d621757691fd14490057a15d5892d39d0505a6a"} Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.663092 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.757785 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts\") pod \"d32a25c3-1275-463a-bfca-f7cac13c5048\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.757846 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z56kq\" (UniqueName: \"kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq\") pod \"d32a25c3-1275-463a-bfca-f7cac13c5048\" (UID: \"d32a25c3-1275-463a-bfca-f7cac13c5048\") " Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.759674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d32a25c3-1275-463a-bfca-f7cac13c5048" (UID: "d32a25c3-1275-463a-bfca-f7cac13c5048"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.767445 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d32a25c3-1275-463a-bfca-f7cac13c5048-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.790297 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq" (OuterVolumeSpecName: "kube-api-access-z56kq") pod "d32a25c3-1275-463a-bfca-f7cac13c5048" (UID: "d32a25c3-1275-463a-bfca-f7cac13c5048"). InnerVolumeSpecName "kube-api-access-z56kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.863046 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b89wr"] Feb 26 14:39:30 crc kubenswrapper[4809]: I0226 14:39:30.869454 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z56kq\" (UniqueName: \"kubernetes.io/projected/d32a25c3-1275-463a-bfca-f7cac13c5048-kube-api-access-z56kq\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.007738 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b7cnn"] Feb 26 14:39:31 crc kubenswrapper[4809]: W0226 14:39:31.029828 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06597a2e_41b4_4d56_bed1_0cb73516bee0.slice/crio-e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c WatchSource:0}: Error finding container e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c: Status 404 returned error can't find the container with id e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.404236 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g49c6"] Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.413027 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.474341 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sdgpk"] Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.541521 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.570348 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dxfnd" event={"ID":"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff","Type":"ContainerStarted","Data":"551e7693efb024ec687228472589d92cc9fcf3a79f04c7b10e17c29e372ebc42"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.572939 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerStarted","Data":"1b123f60866a41787f56c923cfb4d3c259e83006ab83c6bac1274f5ea628fde0"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.575826 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b89wr" event={"ID":"ddf13b0e-9265-48c1-830b-8f0e59578fcf","Type":"ContainerStarted","Data":"1ddd2a94dd4acb2e8e6c8b58655d8a4d7709823531b22848a9ee447134369702"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.577511 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerStarted","Data":"e83a20346388e28ccd118dcaaf82d6d94eb288781b3a2bdf034f5f49090ad153"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.578604 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sdgpk" event={"ID":"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8","Type":"ContainerStarted","Data":"588607bfb1b80c7487a606b9c5f8e11943223c39ba114f48f2c41ac7bec6959f"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.579807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49c6" 
event={"ID":"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9","Type":"ContainerStarted","Data":"942ef78944f133756c680aa4585f41585ca35eb7ef5eabdecc15131ba42acc9e"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.581521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ztvfb" event={"ID":"d32a25c3-1275-463a-bfca-f7cac13c5048","Type":"ContainerDied","Data":"74a71ef4c8ad23a0f6ebc7caace493dbc5c460091115cbc4495f0260f5c570b2"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.581549 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a71ef4c8ad23a0f6ebc7caace493dbc5c460091115cbc4495f0260f5c570b2" Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.581978 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ztvfb" Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.587170 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7cnn" event={"ID":"06597a2e-41b4-4d56-bed1-0cb73516bee0","Type":"ContainerStarted","Data":"e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c"} Feb 26 14:39:31 crc kubenswrapper[4809]: I0226 14:39:31.850684 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:39:31 crc kubenswrapper[4809]: > Feb 26 14:39:32 crc kubenswrapper[4809]: I0226 14:39:32.271326 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:39:32 crc kubenswrapper[4809]: I0226 14:39:32.478729 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddbbc3ad8-368d-42a5-ba41-2c89e8b0502a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddbbc3ad8-368d-42a5-ba41-2c89e8b0502a] : Timed out while waiting for systemd to remove kubepods-besteffort-poddbbc3ad8_368d_42a5_ba41_2c89e8b0502a.slice" Feb 26 14:39:32 crc kubenswrapper[4809]: E0226 14:39:32.478780 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poddbbc3ad8-368d-42a5-ba41-2c89e8b0502a] : unable to destroy cgroup paths for cgroup [kubepods besteffort poddbbc3ad8-368d-42a5-ba41-2c89e8b0502a] : Timed out while waiting for systemd to remove kubepods-besteffort-poddbbc3ad8_368d_42a5_ba41_2c89e8b0502a.slice" pod="openstack/cinder-87d4-account-create-update-zw2lg" podUID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" Feb 26 14:39:32 crc kubenswrapper[4809]: I0226 14:39:32.598521 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-87d4-account-create-update-zw2lg" Feb 26 14:39:33 crc kubenswrapper[4809]: I0226 14:39:33.610800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7cnn" event={"ID":"06597a2e-41b4-4d56-bed1-0cb73516bee0","Type":"ContainerStarted","Data":"a951219fdc2d9e5434d52ccc402f1c9691290b16f1d5fab63fe961e081b6e8d7"} Feb 26 14:39:33 crc kubenswrapper[4809]: I0226 14:39:33.612446 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" event={"ID":"cc54acef-dc5d-4a73-9283-1883c7e3314a","Type":"ContainerStarted","Data":"05466c25578daffba82e98898ad8b7dc1e2d822c71628f97728e154190d0bd72"} Feb 26 14:39:33 crc kubenswrapper[4809]: I0226 14:39:33.614758 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerStarted","Data":"4e3344f1a50d3b4df286abd52a5ffc94d18033e941a513014a78555520ebbf12"} Feb 26 14:39:33 crc kubenswrapper[4809]: I0226 14:39:33.616588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dxfnd" event={"ID":"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff","Type":"ContainerStarted","Data":"062b0915cb5928792785fca79342ace0567a8c18187b6b6faaf8a20741ed4e1e"} Feb 26 14:39:35 crc kubenswrapper[4809]: I0226 14:39:35.639256 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc54acef-dc5d-4a73-9283-1883c7e3314a" containerID="05466c25578daffba82e98898ad8b7dc1e2d822c71628f97728e154190d0bd72" exitCode=0 Feb 26 14:39:35 crc kubenswrapper[4809]: I0226 14:39:35.639416 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" event={"ID":"cc54acef-dc5d-4a73-9283-1883c7e3314a","Type":"ContainerDied","Data":"05466c25578daffba82e98898ad8b7dc1e2d822c71628f97728e154190d0bd72"} Feb 26 14:39:35 crc kubenswrapper[4809]: I0226 14:39:35.668131 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dxfnd" podStartSLOduration=7.66810117 podStartE2EDuration="7.66810117s" podCreationTimestamp="2026-02-26 14:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:35.654897875 +0000 UTC m=+1554.128218398" watchObservedRunningTime="2026-02-26 14:39:35.66810117 +0000 UTC m=+1554.141421693" Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.794376 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.796144 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.796291 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.797290 4809 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.797693 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0" gracePeriod=600 Feb 26 14:39:41 crc kubenswrapper[4809]: I0226 14:39:41.876729 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:41 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:39:41 crc kubenswrapper[4809]: > Feb 26 14:39:48 crc kubenswrapper[4809]: I0226 14:39:48.832189 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b7cnn" podStartSLOduration=19.832173129 podStartE2EDuration="19.832173129s" podCreationTimestamp="2026-02-26 14:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:39:48.831078768 +0000 UTC m=+1567.304399291" watchObservedRunningTime="2026-02-26 14:39:48.832173129 +0000 UTC m=+1567.305493652" Feb 26 14:39:49 crc kubenswrapper[4809]: I0226 14:39:49.820348 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0" exitCode=0 Feb 26 14:39:49 crc kubenswrapper[4809]: I0226 14:39:49.820881 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0"} Feb 26 14:39:49 crc kubenswrapper[4809]: I0226 14:39:49.820918 4809 scope.go:117] "RemoveContainer" containerID="18387769c34c81dfd7e127e2cfc792d343ccf6a79a07a1676e4a9b7deb87f168" Feb 26 14:39:49 crc kubenswrapper[4809]: I0226 14:39:49.822521 4809 generic.go:334] "Generic (PLEG): container finished" podID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerID="4e3344f1a50d3b4df286abd52a5ffc94d18033e941a513014a78555520ebbf12" exitCode=0 Feb 26 14:39:49 crc kubenswrapper[4809]: I0226 14:39:49.822541 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerDied","Data":"4e3344f1a50d3b4df286abd52a5ffc94d18033e941a513014a78555520ebbf12"} Feb 26 14:39:51 crc kubenswrapper[4809]: I0226 14:39:51.813869 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:39:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:39:51 crc kubenswrapper[4809]: > Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.383573 4809 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = reading blob sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493\": context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.383781 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r92sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b89wr_openstack(ddf13b0e-9265-48c1-830b-8f0e59578fcf): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493\": context canceled" logger="UnhandledError" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.385301 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob 
sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:85be161557f2e681766c21a716be31740f94c57bf1d600be08a739ca6f8e2493\\\": context canceled\"" pod="openstack/cinder-db-sync-b89wr" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.407777 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.407943 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5549,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-sdgpk_openstack(3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.409141 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-sdgpk" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.861880 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-sdgpk" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" Feb 26 14:39:52 crc kubenswrapper[4809]: E0226 14:39:52.862452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b89wr" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" Feb 26 14:39:53 crc kubenswrapper[4809]: I0226 14:39:53.872113 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dxfnd" event={"ID":"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff","Type":"ContainerDied","Data":"062b0915cb5928792785fca79342ace0567a8c18187b6b6faaf8a20741ed4e1e"} Feb 26 14:39:53 crc kubenswrapper[4809]: I0226 14:39:53.872150 4809 generic.go:334] "Generic (PLEG): container finished" podID="1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" containerID="062b0915cb5928792785fca79342ace0567a8c18187b6b6faaf8a20741ed4e1e" exitCode=0 Feb 26 14:39:57 crc kubenswrapper[4809]: E0226 14:39:57.468149 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 26 14:39:57 crc kubenswrapper[4809]: E0226 14:39:57.468864 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc6h5dfh584h58ch677h555h66bhb5hdbh5f4h64bh5b8h65bh77h677h7h4h684h687h56fh56bh84h589h57ch546h698h5bhb9h5ffh89h574h58bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mfzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(76b6769a-0dce-4f13-8b83-720ae328c81b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.593943 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689243 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689656 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp8s7\" (UniqueName: \"kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689840 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689873 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.689936 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0\") pod \"cc54acef-dc5d-4a73-9283-1883c7e3314a\" (UID: \"cc54acef-dc5d-4a73-9283-1883c7e3314a\") " Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.716562 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7" (OuterVolumeSpecName: 
"kube-api-access-zp8s7") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "kube-api-access-zp8s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.720509 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.723651 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config" (OuterVolumeSpecName: "config") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.725276 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.735580 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.739167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc54acef-dc5d-4a73-9283-1883c7e3314a" (UID: "cc54acef-dc5d-4a73-9283-1883c7e3314a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793306 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp8s7\" (UniqueName: \"kubernetes.io/projected/cc54acef-dc5d-4a73-9283-1883c7e3314a-kube-api-access-zp8s7\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793343 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793352 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793360 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793369 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.793377 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc54acef-dc5d-4a73-9283-1883c7e3314a-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.919290 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" event={"ID":"cc54acef-dc5d-4a73-9283-1883c7e3314a","Type":"ContainerDied","Data":"af4f9e4c6771c9c8c4e1d7fc24f77e8d4f9de1edb17673dcdd860703c48c2793"} Feb 26 14:39:57 crc kubenswrapper[4809]: I0226 14:39:57.919363 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ktq5c" Feb 26 14:39:58 crc kubenswrapper[4809]: I0226 14:39:58.032299 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:58 crc kubenswrapper[4809]: I0226 14:39:58.065234 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ktq5c"] Feb 26 14:39:58 crc kubenswrapper[4809]: I0226 14:39:58.272898 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc54acef-dc5d-4a73-9283-1883c7e3314a" path="/var/lib/kubelet/pods/cc54acef-dc5d-4a73-9283-1883c7e3314a/volumes" Feb 26 14:39:58 crc kubenswrapper[4809]: E0226 14:39:58.278177 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 26 14:39:58 crc kubenswrapper[4809]: E0226 14:39:58.278327 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g49c6_openstack(fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:39:58 crc kubenswrapper[4809]: E0226 14:39:58.279481 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g49c6" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" Feb 26 14:39:58 crc kubenswrapper[4809]: E0226 14:39:58.933384 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" 
pod="openstack/barbican-db-sync-g49c6" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.145039 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535280-ghpjl"] Feb 26 14:40:00 crc kubenswrapper[4809]: E0226 14:40:00.145687 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32a25c3-1275-463a-bfca-f7cac13c5048" containerName="mariadb-account-create-update" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.145707 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32a25c3-1275-463a-bfca-f7cac13c5048" containerName="mariadb-account-create-update" Feb 26 14:40:00 crc kubenswrapper[4809]: E0226 14:40:00.145720 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc54acef-dc5d-4a73-9283-1883c7e3314a" containerName="init" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.145727 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc54acef-dc5d-4a73-9283-1883c7e3314a" containerName="init" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.146038 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32a25c3-1275-463a-bfca-f7cac13c5048" containerName="mariadb-account-create-update" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.146064 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc54acef-dc5d-4a73-9283-1883c7e3314a" containerName="init" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.147041 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.149140 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.153328 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.155487 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.160581 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-ghpjl"] Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.263138 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhgx\" (UniqueName: \"kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx\") pod \"auto-csr-approver-29535280-ghpjl\" (UID: \"1afcfb36-d52b-43b1-9abc-59e0242c83f1\") " pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.366967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhgx\" (UniqueName: \"kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx\") pod \"auto-csr-approver-29535280-ghpjl\" (UID: \"1afcfb36-d52b-43b1-9abc-59e0242c83f1\") " pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.397924 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhgx\" (UniqueName: \"kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx\") pod \"auto-csr-approver-29535280-ghpjl\" (UID: 
\"1afcfb36-d52b-43b1-9abc-59e0242c83f1\") " pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:00 crc kubenswrapper[4809]: I0226 14:40:00.476108 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:01 crc kubenswrapper[4809]: I0226 14:40:01.803243 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:01 crc kubenswrapper[4809]: > Feb 26 14:40:02 crc kubenswrapper[4809]: E0226 14:40:02.581468 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3512525135/2\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 26 14:40:02 crc kubenswrapper[4809]: E0226 14:40:02.581910 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vjmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-pph48_openstack(84499f28-1908-4654-b0bc-a6961f49bb57): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3512525135/2\": happened during read: context canceled" logger="UnhandledError" Feb 26 14:40:02 crc kubenswrapper[4809]: E0226 14:40:02.583576 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3512525135/2\\\": happened during read: context canceled\"" pod="openstack/heat-db-sync-pph48" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" Feb 26 14:40:02 crc kubenswrapper[4809]: I0226 14:40:02.645322 4809 scope.go:117] "RemoveContainer" containerID="05466c25578daffba82e98898ad8b7dc1e2d822c71628f97728e154190d0bd72" Feb 26 14:40:02 crc kubenswrapper[4809]: I0226 14:40:02.889800 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:40:02 crc kubenswrapper[4809]: I0226 14:40:02.981775 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dxfnd" event={"ID":"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff","Type":"ContainerDied","Data":"551e7693efb024ec687228472589d92cc9fcf3a79f04c7b10e17c29e372ebc42"} Feb 26 14:40:02 crc kubenswrapper[4809]: I0226 14:40:02.982330 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="551e7693efb024ec687228472589d92cc9fcf3a79f04c7b10e17c29e372ebc42" Feb 26 14:40:02 crc kubenswrapper[4809]: I0226 14:40:02.981835 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dxfnd" Feb 26 14:40:02 crc kubenswrapper[4809]: E0226 14:40:02.983974 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-pph48" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.031418 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.031777 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.032064 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.033187 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.033280 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56df\" (UniqueName: \"kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: 
\"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.033393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys\") pod \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\" (UID: \"1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff\") " Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.040810 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts" (OuterVolumeSpecName: "scripts") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.041126 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.042371 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.042566 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df" (OuterVolumeSpecName: "kube-api-access-r56df") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "kube-api-access-r56df". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.064199 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data" (OuterVolumeSpecName: "config-data") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.067262 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" (UID: "1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136769 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136808 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56df\" (UniqueName: \"kubernetes.io/projected/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-kube-api-access-r56df\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136821 4809 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136832 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136852 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.136865 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.187670 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-ghpjl"] Feb 26 14:40:03 crc kubenswrapper[4809]: W0226 14:40:03.485302 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1afcfb36_d52b_43b1_9abc_59e0242c83f1.slice/crio-6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497 WatchSource:0}: Error finding container 6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497: Status 404 returned error can't find the container with id 6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497 Feb 26 14:40:03 crc kubenswrapper[4809]: I0226 14:40:03.997848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" event={"ID":"1afcfb36-d52b-43b1-9abc-59e0242c83f1","Type":"ContainerStarted","Data":"6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497"} Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.009291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155"} Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.017062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerStarted","Data":"43e9f84d9a5f8a7da1aadc798f85d0198a07606d11436546c47208d39f37b263"} Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.017531 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:40:04 crc kubenswrapper[4809]: 
I0226 14:40:04.039713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerStarted","Data":"ded58b525fad1eebc13454b529958aee7717ff5bad17b1e87758ea615c638b5c"} Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.182444 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dxfnd"] Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.377693 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dxfnd"] Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.461212 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podStartSLOduration=35.461189172 podStartE2EDuration="35.461189172s" podCreationTimestamp="2026-02-26 14:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:04.173684729 +0000 UTC m=+1582.647005252" watchObservedRunningTime="2026-02-26 14:40:04.461189172 +0000 UTC m=+1582.934509695" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.475760 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rhm7x"] Feb 26 14:40:04 crc kubenswrapper[4809]: E0226 14:40:04.476400 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" containerName="keystone-bootstrap" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.476428 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" containerName="keystone-bootstrap" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.476712 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" containerName="keystone-bootstrap" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.477521 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.483125 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wtrbf" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.483482 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.483632 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.483139 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.486581 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.488766 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rhm7x"] Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588121 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd7qv\" (UniqueName: \"kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588390 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.588507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690383 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd7qv\" (UniqueName: \"kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.690967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.698093 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.698895 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.699955 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.704489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys\") pod \"keystone-bootstrap-rhm7x\" (UID: 
\"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.712449 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.713280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd7qv\" (UniqueName: \"kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv\") pod \"keystone-bootstrap-rhm7x\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:04 crc kubenswrapper[4809]: I0226 14:40:04.805305 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:05 crc kubenswrapper[4809]: I0226 14:40:05.341007 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rhm7x"] Feb 26 14:40:05 crc kubenswrapper[4809]: W0226 14:40:05.350626 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703ff5d0_61b5_407c_b4de_b163668a8851.slice/crio-fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2 WatchSource:0}: Error finding container fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2: Status 404 returned error can't find the container with id fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2 Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.070390 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhm7x" event={"ID":"703ff5d0-61b5-407c-b4de-b163668a8851","Type":"ContainerStarted","Data":"e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9"} Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.070747 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhm7x" event={"ID":"703ff5d0-61b5-407c-b4de-b163668a8851","Type":"ContainerStarted","Data":"fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2"} Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.075074 4809 generic.go:334] "Generic (PLEG): container finished" podID="1afcfb36-d52b-43b1-9abc-59e0242c83f1" containerID="09f6260937bad6e5eef607f4499c70afded060ec8faf07e3ddeafd4f431a2ac6" exitCode=0 Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.075148 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" event={"ID":"1afcfb36-d52b-43b1-9abc-59e0242c83f1","Type":"ContainerDied","Data":"09f6260937bad6e5eef607f4499c70afded060ec8faf07e3ddeafd4f431a2ac6"} Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.078668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sdgpk" event={"ID":"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8","Type":"ContainerStarted","Data":"b2b000b45403b605c4810921b54d45452882eaa554acbc09360d4daab27ef554"} Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.108910 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rhm7x" podStartSLOduration=2.108891118 podStartE2EDuration="2.108891118s" podCreationTimestamp="2026-02-26 14:40:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:06.085450882 +0000 UTC m=+1584.558771425" watchObservedRunningTime="2026-02-26 14:40:06.108891118 +0000 UTC m=+1584.582211641" Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.130699 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-sdgpk" podStartSLOduration=3.850978877 podStartE2EDuration="37.130677386s" podCreationTimestamp="2026-02-26 14:39:29 +0000 UTC" firstStartedPulling="2026-02-26 14:39:31.477245641 +0000 UTC m=+1549.950566154" lastFinishedPulling="2026-02-26 14:40:04.75694414 +0000 UTC m=+1583.230264663" observedRunningTime="2026-02-26 14:40:06.129463262 +0000 UTC m=+1584.602783795" watchObservedRunningTime="2026-02-26 14:40:06.130677386 +0000 UTC m=+1584.603997909" Feb 26 14:40:06 crc kubenswrapper[4809]: I0226 14:40:06.287666 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff" path="/var/lib/kubelet/pods/1b7d0bc1-f55c-4425-ad3b-8d9c64e8d7ff/volumes" Feb 26 14:40:09 crc kubenswrapper[4809]: I0226 14:40:09.128931 4809 generic.go:334] "Generic (PLEG): container finished" podID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" containerID="b2b000b45403b605c4810921b54d45452882eaa554acbc09360d4daab27ef554" exitCode=0 Feb 26 14:40:09 crc kubenswrapper[4809]: I0226 14:40:09.129049 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sdgpk" event={"ID":"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8","Type":"ContainerDied","Data":"b2b000b45403b605c4810921b54d45452882eaa554acbc09360d4daab27ef554"} Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.140732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" event={"ID":"1afcfb36-d52b-43b1-9abc-59e0242c83f1","Type":"ContainerDied","Data":"6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497"} Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.140780 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd1c34b4dbca302d7787385c67aeae53ba221d435dabee769ea5db7c23ef497" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.142131 4809 generic.go:334] "Generic (PLEG): container finished" podID="8cac2949-71b1-417b-b184-e890f4a309ad" containerID="0801b838d74337a799a33972e83e803fd41c03dac85dc4c693a1cb6db903f81d" exitCode=0 Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.142218 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x42ls" event={"ID":"8cac2949-71b1-417b-b184-e890f4a309ad","Type":"ContainerDied","Data":"0801b838d74337a799a33972e83e803fd41c03dac85dc4c693a1cb6db903f81d"} Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.203212 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.378868 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.379111 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="dnsmasq-dns" containerID="cri-o://2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909" gracePeriod=10 Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.401348 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-ghpjl"
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.577747 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzhgx\" (UniqueName: \"kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx\") pod \"1afcfb36-d52b-43b1-9abc-59e0242c83f1\" (UID: \"1afcfb36-d52b-43b1-9abc-59e0242c83f1\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.597484 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx" (OuterVolumeSpecName: "kube-api-access-jzhgx") pod "1afcfb36-d52b-43b1-9abc-59e0242c83f1" (UID: "1afcfb36-d52b-43b1-9abc-59e0242c83f1"). InnerVolumeSpecName "kube-api-access-jzhgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.680999 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzhgx\" (UniqueName: \"kubernetes.io/projected/1afcfb36-d52b-43b1-9abc-59e0242c83f1-kube-api-access-jzhgx\") on node \"crc\" DevicePath \"\""
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.702119 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sdgpk"
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.884924 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle\") pod \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.885356 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts\") pod \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.885466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs\") pod \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.885502 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5549\" (UniqueName: \"kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549\") pod \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.885709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data\") pod \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\" (UID: \"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8\") "
Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.886295 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs" (OuterVolumeSpecName: "logs") pod "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" (UID: "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.889107 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.889967 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts" (OuterVolumeSpecName: "scripts") pod "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" (UID: "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.892154 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549" (OuterVolumeSpecName: "kube-api-access-n5549") pod "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" (UID: "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8"). InnerVolumeSpecName "kube-api-access-n5549". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.896246 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.929590 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data" (OuterVolumeSpecName: "config-data") pod "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" (UID: "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.932091 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" (UID: "3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.989576 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.989643 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.989676 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.989844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtbkz\" (UniqueName: \"kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.989951 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.990004 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc\") pod \"91595d30-de54-4cf9-947a-1e9e1b8c411b\" (UID: \"91595d30-de54-4cf9-947a-1e9e1b8c411b\") " Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.990483 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.990501 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.991487 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:10 crc kubenswrapper[4809]: I0226 14:40:10.991506 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5549\" (UniqueName: \"kubernetes.io/projected/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8-kube-api-access-n5549\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.001254 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz" (OuterVolumeSpecName: "kube-api-access-gtbkz") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" 
(UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "kube-api-access-gtbkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.075432 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" (UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.075616 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" (UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.079210 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config" (OuterVolumeSpecName: "config") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" (UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.083670 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" (UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.094618 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtbkz\" (UniqueName: \"kubernetes.io/projected/91595d30-de54-4cf9-947a-1e9e1b8c411b-kube-api-access-gtbkz\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.094656 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.094669 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.094682 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.094692 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.097555 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91595d30-de54-4cf9-947a-1e9e1b8c411b" (UID: "91595d30-de54-4cf9-947a-1e9e1b8c411b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.197783 4809 generic.go:334] "Generic (PLEG): container finished" podID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerID="2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909" exitCode=0 Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.197844 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.197852 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" event={"ID":"91595d30-de54-4cf9-947a-1e9e1b8c411b","Type":"ContainerDied","Data":"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909"} Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.197877 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-9b26n" event={"ID":"91595d30-de54-4cf9-947a-1e9e1b8c411b","Type":"ContainerDied","Data":"9fa08f83e0682a5f52043cb3ad3f260a3a53c4a50db20fb0f1e69b07443989f7"} Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.197891 4809 scope.go:117] "RemoveContainer" containerID="2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.200153 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91595d30-de54-4cf9-947a-1e9e1b8c411b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.212788 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sdgpk" event={"ID":"3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8","Type":"ContainerDied","Data":"588607bfb1b80c7487a606b9c5f8e11943223c39ba114f48f2c41ac7bec6959f"} Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.212854 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588607bfb1b80c7487a606b9c5f8e11943223c39ba114f48f2c41ac7bec6959f" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.212956 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sdgpk" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.223272 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535280-ghpjl" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.224449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerStarted","Data":"c93288e727859bd3af156a0e8d95f76eab39db2f557c98bd8c8fc7da5af1dd90"} Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.258411 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.286963 4809 scope.go:117] "RemoveContainer" containerID="63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.288532 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-9b26n"] Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.307709 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.308145 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="init" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.308164 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="init" Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.308181 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" containerName="placement-db-sync" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.308189 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" containerName="placement-db-sync" Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.308208 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afcfb36-d52b-43b1-9abc-59e0242c83f1" containerName="oc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.308214 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afcfb36-d52b-43b1-9abc-59e0242c83f1" containerName="oc" Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.308237 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="dnsmasq-dns" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.308243 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="dnsmasq-dns" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.309094 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afcfb36-d52b-43b1-9abc-59e0242c83f1" containerName="oc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.309112 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" containerName="dnsmasq-dns" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.309122 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" containerName="placement-db-sync" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.310535 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.314138 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.314414 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.315057 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.315425 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kll8r" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.319039 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.325813 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.368505 4809 scope.go:117] "RemoveContainer" containerID="2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909" Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.369221 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909\": container with ID starting with 2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909 not found: ID does not exist" containerID="2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.369255 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909"} err="failed to get container status \"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909\": rpc error: code = NotFound desc = could not find container \"2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909\": container with ID starting with 2746cd503047a0a8bd23a5230ccc621c39e383f60038849f5db7388ba4559909 not found: ID does not exist" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.369276 4809 scope.go:117] "RemoveContainer" containerID="63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d" Feb 26 14:40:11 crc kubenswrapper[4809]: E0226 14:40:11.369532 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d\": container with ID starting with 63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d not found: ID does not exist" containerID="63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.369560 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d"} err="failed to get container status \"63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d\": rpc error: code = NotFound desc = could not find container \"63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d\": container with ID starting with 63d6453aa53e252791d02da7da8c81dfc7ea60f199eefb5b9ea80d41f4a8a99d not found: ID does not exist" 
Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.475941 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-k2m6c"] Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.485195 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535274-k2m6c"] Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.506855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.506920 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.507007 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.507127 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.507163 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.507205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.507234 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611117 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc 
kubenswrapper[4809]: I0226 14:40:11.611187 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611273 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611486 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.611616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.617404 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.621389 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.623639 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.627487 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.630328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.634475 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.660689 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs\") pod \"placement-595b9697cb-h9llc\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.686638 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.834265 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:11 crc kubenswrapper[4809]: > Feb 26 14:40:11 crc kubenswrapper[4809]: I0226 14:40:11.912252 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x42ls" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.020832 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data\") pod \"8cac2949-71b1-417b-b184-e890f4a309ad\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.020911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data\") pod \"8cac2949-71b1-417b-b184-e890f4a309ad\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.021228 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle\") pod \"8cac2949-71b1-417b-b184-e890f4a309ad\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.021262 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8q27\" (UniqueName: \"kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27\") pod \"8cac2949-71b1-417b-b184-e890f4a309ad\" (UID: \"8cac2949-71b1-417b-b184-e890f4a309ad\") " Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.030666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8cac2949-71b1-417b-b184-e890f4a309ad" (UID: "8cac2949-71b1-417b-b184-e890f4a309ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.031354 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27" (OuterVolumeSpecName: "kube-api-access-d8q27") pod "8cac2949-71b1-417b-b184-e890f4a309ad" (UID: "8cac2949-71b1-417b-b184-e890f4a309ad"). InnerVolumeSpecName "kube-api-access-d8q27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:12 crc kubenswrapper[4809]: E0226 14:40:12.034918 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703ff5d0_61b5_407c_b4de_b163668a8851.slice/crio-conmon-e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703ff5d0_61b5_407c_b4de_b163668a8851.slice/crio-e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9.scope\": RecentStats: unable to find data in memory cache]" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.083728 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cac2949-71b1-417b-b184-e890f4a309ad" (UID: "8cac2949-71b1-417b-b184-e890f4a309ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.098044 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data" (OuterVolumeSpecName: "config-data") pod "8cac2949-71b1-417b-b184-e890f4a309ad" (UID: "8cac2949-71b1-417b-b184-e890f4a309ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.123657 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.123692 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8q27\" (UniqueName: \"kubernetes.io/projected/8cac2949-71b1-417b-b184-e890f4a309ad-kube-api-access-d8q27\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.123705 4809 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.123715 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cac2949-71b1-417b-b184-e890f4a309ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.195962 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.270712 4809 generic.go:334] "Generic (PLEG): container finished" podID="703ff5d0-61b5-407c-b4de-b163668a8851" containerID="e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9" exitCode=0 Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.285624 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x42ls" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.285706 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e044d4b-4f62-464c-b887-005d79ce073c" path="/var/lib/kubelet/pods/8e044d4b-4f62-464c-b887-005d79ce073c/volumes" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.286468 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91595d30-de54-4cf9-947a-1e9e1b8c411b" path="/var/lib/kubelet/pods/91595d30-de54-4cf9-947a-1e9e1b8c411b/volumes" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.290127 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerStarted","Data":"321a6c005b3fb4566c8bf7fadc5904207eda78372bc0aa4b74e06c4441d0677f"} Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.290158 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhm7x" event={"ID":"703ff5d0-61b5-407c-b4de-b163668a8851","Type":"ContainerDied","Data":"e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9"} Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.290173 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x42ls" event={"ID":"8cac2949-71b1-417b-b184-e890f4a309ad","Type":"ContainerDied","Data":"07aff75bf4616ea1b4c64f9a15894b3b672d872726fef920256108b667604ade"} Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.290184 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07aff75bf4616ea1b4c64f9a15894b3b672d872726fef920256108b667604ade" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.824499 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:40:12 crc kubenswrapper[4809]: E0226 14:40:12.825164 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cac2949-71b1-417b-b184-e890f4a309ad" containerName="glance-db-sync" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.825176 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cac2949-71b1-417b-b184-e890f4a309ad" containerName="glance-db-sync" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.825460 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cac2949-71b1-417b-b184-e890f4a309ad" containerName="glance-db-sync" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.826660 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.866220 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.970865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.970939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.971135 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.971208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.971240 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwp2\" (UniqueName: \"kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:12 crc kubenswrapper[4809]: I0226 14:40:12.971328 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.072852 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.072923 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.072943 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-spwp2\" (UniqueName: \"kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.072993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.073079 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.073106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.073799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.073821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.074827 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.074985 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.075070 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.185605 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwp2\" (UniqueName: 
\"kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2\") pod \"dnsmasq-dns-8b5c85b87-ld5zt\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.314421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49c6" event={"ID":"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9","Type":"ContainerStarted","Data":"f833bb0d99a2999ea15062758a6c24644e0775b1b30bb0681a40b8d788567bc0"} Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.339515 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerStarted","Data":"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4"} Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.359429 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g49c6" podStartSLOduration=3.558361939 podStartE2EDuration="44.359400144s" podCreationTimestamp="2026-02-26 14:39:29 +0000 UTC" firstStartedPulling="2026-02-26 14:39:31.417905787 +0000 UTC m=+1549.891226310" lastFinishedPulling="2026-02-26 14:40:12.218943982 +0000 UTC m=+1590.692264515" observedRunningTime="2026-02-26 14:40:13.336151314 +0000 UTC m=+1591.809471837" watchObservedRunningTime="2026-02-26 14:40:13.359400144 +0000 UTC m=+1591.832720667" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.471689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.875800 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.882954 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:13 crc kubenswrapper[4809]: E0226 14:40:13.883520 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703ff5d0-61b5-407c-b4de-b163668a8851" containerName="keystone-bootstrap" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.883538 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="703ff5d0-61b5-407c-b4de-b163668a8851" containerName="keystone-bootstrap" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.883867 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="703ff5d0-61b5-407c-b4de-b163668a8851" containerName="keystone-bootstrap" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.885425 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.889581 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.889816 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4x9gd" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.890863 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 14:40:13 crc kubenswrapper[4809]: I0226 14:40:13.908596 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.012623 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.015242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.015346 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.015465 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.015524 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.015657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd7qv\" (UniqueName: \"kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv\") pod \"703ff5d0-61b5-407c-b4de-b163668a8851\" (UID: \"703ff5d0-61b5-407c-b4de-b163668a8851\") " Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016089 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016309 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4t26\" (UniqueName: \"kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26\") pod 
\"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016372 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016468 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016535 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.016669 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.021047 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.023898 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.032342 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.035622 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts" (OuterVolumeSpecName: "scripts") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.050359 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv" (OuterVolumeSpecName: "kube-api-access-sd7qv") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "kube-api-access-sd7qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.052173 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.058873 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.092466 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.097138 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data" (OuterVolumeSpecName: "config-data") pod "703ff5d0-61b5-407c-b4de-b163668a8851" (UID: "703ff5d0-61b5-407c-b4de-b163668a8851"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.118765 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxc6c\" (UniqueName: \"kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.118826 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.118866 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.118897 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.118995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119066 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4t26\" (UniqueName: \"kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119117 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119136 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119197 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119347 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd7qv\" (UniqueName: \"kubernetes.io/projected/703ff5d0-61b5-407c-b4de-b163668a8851-kube-api-access-sd7qv\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119359 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119368 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119378 4809 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119386 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.119418 4809 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703ff5d0-61b5-407c-b4de-b163668a8851-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.120125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.136170 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.141270 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.143487 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.143530 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5430fcd2916c7b014ac0286e22544a8396394be2a5cb5110057444f013914bf0/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.153898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.169361 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.170881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4t26\" (UniqueName: \"kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.220903 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " 
pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221138 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxc6c\" (UniqueName: \"kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221179 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221221 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221305 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221371 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.221629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.226731 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: 
I0226 14:40:14.227905 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.227948 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/290bcfa2d95fcff9bcdffee07cdf41f807340e8945582a9224b291984718a620/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.236833 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.246199 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.246771 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxc6c\" (UniqueName: \"kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.259427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.376536 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.401463 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rhm7x" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.402453 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhm7x" event={"ID":"703ff5d0-61b5-407c-b4de-b163668a8851","Type":"ContainerDied","Data":"fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2"} Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.402505 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7e4935ce7488f703a8492e4d6a22f58cdc5cd61cc10d6e183e9c331c43e0e2" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.402465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.424916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerStarted","Data":"5e2f054c3eb4d04c9dac12551ea0a5e22392a1aca243ac3941c254a424424bf1"} Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.443065 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerStarted","Data":"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48"} Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.443350 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.443583 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.470891 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-595b9697cb-h9llc" podStartSLOduration=3.470866264 podStartE2EDuration="3.470866264s" podCreationTimestamp="2026-02-26 14:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:14.463787253 +0000 UTC m=+1592.937107776" watchObservedRunningTime="2026-02-26 14:40:14.470866264 +0000 UTC m=+1592.944186797" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.535276 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.539336 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b67cbb9bb-wjj2d"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.540698 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.545364 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.546049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.546179 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.546294 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wtrbf" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.546408 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.547971 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.566524 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b67cbb9bb-wjj2d"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637127 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-public-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637437 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-scripts\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-credential-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637496 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cn7h\" (UniqueName: \"kubernetes.io/projected/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-kube-api-access-5cn7h\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637549 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-combined-ca-bundle\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637571 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-config-data\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: 
\"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637618 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-fernet-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.637642 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-internal-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.687965 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740220 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-combined-ca-bundle\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740269 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-config-data\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740338 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-fernet-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-internal-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740426 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-public-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-scripts\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740539 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-credential-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.740565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cn7h\" (UniqueName: \"kubernetes.io/projected/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-kube-api-access-5cn7h\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.749380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-credential-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.749706 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-scripts\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.750042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-internal-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.755036 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-public-tls-certs\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.760307 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-config-data\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.760819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-fernet-keys\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.764083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-combined-ca-bundle\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.776839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cn7h\" (UniqueName: \"kubernetes.io/projected/44ac06c8-98f3-478a-bfca-6eca9c2fc66b-kube-api-access-5cn7h\") pod \"keystone-5b67cbb9bb-wjj2d\" (UID: \"44ac06c8-98f3-478a-bfca-6eca9c2fc66b\") " 
pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.867687 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c65dd4586-db9jl"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.871069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.898586 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.939097 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c65dd4586-db9jl"] Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.946834 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c81db6-7044-4d85-9680-bb4744af4cba-logs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.946885 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-combined-ca-bundle\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.946939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-config-data\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.947417 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-scripts\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.947541 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99tsg\" (UniqueName: \"kubernetes.io/projected/79c81db6-7044-4d85-9680-bb4744af4cba-kube-api-access-99tsg\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.947607 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-internal-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:14 crc kubenswrapper[4809]: I0226 14:40:14.947720 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-public-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc 
kubenswrapper[4809]: I0226 14:40:15.049832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tsg\" (UniqueName: \"kubernetes.io/projected/79c81db6-7044-4d85-9680-bb4744af4cba-kube-api-access-99tsg\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.049899 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-internal-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.049949 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-public-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.050001 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c81db6-7044-4d85-9680-bb4744af4cba-logs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.050041 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-combined-ca-bundle\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.050084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-config-data\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.050162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-scripts\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.054257 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c81db6-7044-4d85-9680-bb4744af4cba-logs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.063299 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-public-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.063342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-combined-ca-bundle\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.064412 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-scripts\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.066346 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-config-data\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.074440 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79c81db6-7044-4d85-9680-bb4744af4cba-internal-tls-certs\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.139893 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tsg\" (UniqueName: \"kubernetes.io/projected/79c81db6-7044-4d85-9680-bb4744af4cba-kube-api-access-99tsg\") pod \"placement-6c65dd4586-db9jl\" (UID: \"79c81db6-7044-4d85-9680-bb4744af4cba\") " pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.219905 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.343118 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.490860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerStarted","Data":"9b40dbb10d9794c95ad44582665290af05af0154d7f4bb27e6681617d397d82e"} Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.496658 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerStarted","Data":"2d50f7dc9eba0a31d32a6c7ef969a0fc24977f1bf13a005ab549b08bb637db3f"} Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.586576 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.620646 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b67cbb9bb-wjj2d"] Feb 26 14:40:15 crc kubenswrapper[4809]: W0226 14:40:15.632261 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44ac06c8_98f3_478a_bfca_6eca9c2fc66b.slice/crio-5921f5f8bc68d8f0add95cfc73f124bf3b9f4c1a17225f107cf353667663c7cc WatchSource:0}: Error finding container 5921f5f8bc68d8f0add95cfc73f124bf3b9f4c1a17225f107cf353667663c7cc: Status 404 returned error can't find the container with id 5921f5f8bc68d8f0add95cfc73f124bf3b9f4c1a17225f107cf353667663c7cc Feb 26 14:40:15 crc kubenswrapper[4809]: I0226 14:40:15.802767 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c65dd4586-db9jl"] Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.510574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b67cbb9bb-wjj2d" event={"ID":"44ac06c8-98f3-478a-bfca-6eca9c2fc66b","Type":"ContainerStarted","Data":"1875cd28ab21f5bacc6c6c5c1c296df88403ac1a4382eb40936af6899674ba35"} Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.510916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b67cbb9bb-wjj2d" event={"ID":"44ac06c8-98f3-478a-bfca-6eca9c2fc66b","Type":"ContainerStarted","Data":"5921f5f8bc68d8f0add95cfc73f124bf3b9f4c1a17225f107cf353667663c7cc"} Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.512649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerStarted","Data":"3c3999c589595e7f306ddd3db69d1a32dffe405ccfba16ff7d9eb14b83b005c6"} Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.515813 4809 generic.go:334] "Generic (PLEG): container finished" podID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerID="9b40dbb10d9794c95ad44582665290af05af0154d7f4bb27e6681617d397d82e" exitCode=0 Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.515889 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerDied","Data":"9b40dbb10d9794c95ad44582665290af05af0154d7f4bb27e6681617d397d82e"} Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.532521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-6c65dd4586-db9jl" event={"ID":"79c81db6-7044-4d85-9680-bb4744af4cba","Type":"ContainerStarted","Data":"2ff4eb723dc4174d4407c8d1afe6662ea887089ebae0c935e6605d650a5b99f3"} Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.653872 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:16 crc kubenswrapper[4809]: I0226 14:40:16.740267 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:17 crc kubenswrapper[4809]: I0226 14:40:17.578006 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c65dd4586-db9jl" event={"ID":"79c81db6-7044-4d85-9680-bb4744af4cba","Type":"ContainerStarted","Data":"7ac740e9b4b507b8089ceb1d7af9c311b39f7b9c6ef1a82a246e2380021a42c2"} Feb 26 14:40:17 crc kubenswrapper[4809]: I0226 14:40:17.584260 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerStarted","Data":"f1db9d76b13851dbce7f632b7ecfc2eb02506f8c9be60a55ebb8eccfdc735fc8"} Feb 26 14:40:17 crc kubenswrapper[4809]: I0226 14:40:17.588191 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerStarted","Data":"7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9"} Feb 26 14:40:17 crc kubenswrapper[4809]: I0226 14:40:17.588257 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:17 crc kubenswrapper[4809]: I0226 14:40:17.618840 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b67cbb9bb-wjj2d" podStartSLOduration=3.61882256 podStartE2EDuration="3.61882256s" podCreationTimestamp="2026-02-26 14:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:17.610174484 +0000 UTC m=+1596.083495007" watchObservedRunningTime="2026-02-26 14:40:17.61882256 +0000 UTC m=+1596.092143083" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.599169 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerStarted","Data":"add3e7537c53543ef50cb101648b46ec2d6a72abc2167b22f50699db40d9700a"} Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.599231 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-log" containerID="cri-o://7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9" gracePeriod=30 Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.599314 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-httpd" containerID="cri-o://add3e7537c53543ef50cb101648b46ec2d6a72abc2167b22f50699db40d9700a" gracePeriod=30 Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.603084 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c65dd4586-db9jl" 
event={"ID":"79c81db6-7044-4d85-9680-bb4744af4cba","Type":"ContainerStarted","Data":"723e085dffceb18f4c7d54cb7b370373011ff71fc196b78e4fb3734b9257677f"} Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.603242 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.603306 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.607876 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerStarted","Data":"1f76858d44bdc4cef403b225a50eac58dfe50327d5dca64496fc41e0fab0d2e7"} Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.611530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerStarted","Data":"e5a8823a60e155be5c07f15d753fb5e333ae7bbe38a8557b31b08728c775240c"} Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.611689 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.634062 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.634038067 podStartE2EDuration="6.634038067s" podCreationTimestamp="2026-02-26 14:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:18.621766618 +0000 UTC m=+1597.095087151" watchObservedRunningTime="2026-02-26 14:40:18.634038067 +0000 UTC m=+1597.107358600" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.664990 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" podStartSLOduration=6.664970545 podStartE2EDuration="6.664970545s" podCreationTimestamp="2026-02-26 14:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:18.645368588 +0000 UTC m=+1597.118689121" watchObservedRunningTime="2026-02-26 14:40:18.664970545 +0000 UTC m=+1597.138291068" Feb 26 14:40:18 crc kubenswrapper[4809]: I0226 14:40:18.679651 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c65dd4586-db9jl" podStartSLOduration=4.679629881 podStartE2EDuration="4.679629881s" podCreationTimestamp="2026-02-26 14:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:18.675949317 +0000 UTC m=+1597.149269840" watchObservedRunningTime="2026-02-26 14:40:18.679629881 +0000 UTC m=+1597.152950394" Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.625268 4809 generic.go:334] "Generic (PLEG): container finished" podID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerID="add3e7537c53543ef50cb101648b46ec2d6a72abc2167b22f50699db40d9700a" exitCode=143 Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.625749 4809 generic.go:334] "Generic (PLEG): container finished" podID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerID="7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9" exitCode=143 Feb 26 14:40:19 crc 
kubenswrapper[4809]: I0226 14:40:19.625341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerDied","Data":"add3e7537c53543ef50cb101648b46ec2d6a72abc2167b22f50699db40d9700a"} Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.625875 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerDied","Data":"7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9"} Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.625885 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-log" containerID="cri-o://f1db9d76b13851dbce7f632b7ecfc2eb02506f8c9be60a55ebb8eccfdc735fc8" gracePeriod=30 Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.625934 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-httpd" containerID="cri-o://1f76858d44bdc4cef403b225a50eac58dfe50327d5dca64496fc41e0fab0d2e7" gracePeriod=30 Feb 26 14:40:19 crc kubenswrapper[4809]: I0226 14:40:19.654663 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.654639967 podStartE2EDuration="7.654639967s" podCreationTimestamp="2026-02-26 14:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:19.646344881 +0000 UTC m=+1598.119665404" watchObservedRunningTime="2026-02-26 14:40:19.654639967 +0000 UTC m=+1598.127960500" Feb 26 14:40:20 crc kubenswrapper[4809]: I0226 14:40:20.641958 4809 generic.go:334] "Generic (PLEG): container finished" podID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerID="f1db9d76b13851dbce7f632b7ecfc2eb02506f8c9be60a55ebb8eccfdc735fc8" exitCode=143 Feb 26 14:40:20 crc kubenswrapper[4809]: I0226 14:40:20.642162 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerDied","Data":"f1db9d76b13851dbce7f632b7ecfc2eb02506f8c9be60a55ebb8eccfdc735fc8"} Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.662174 4809 generic.go:334] "Generic (PLEG): container finished" podID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerID="1f76858d44bdc4cef403b225a50eac58dfe50327d5dca64496fc41e0fab0d2e7" exitCode=143 Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.662251 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerDied","Data":"1f76858d44bdc4cef403b225a50eac58dfe50327d5dca64496fc41e0fab0d2e7"} Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.837362 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:21 crc kubenswrapper[4809]: > Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.837455 4809 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.838559 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d"} pod="openshift-marketplace/redhat-operators-lkxlc" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.838612 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" containerID="cri-o://cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d" gracePeriod=30 Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.977199 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:21 crc kubenswrapper[4809]: I0226 14:40:21.983874 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061482 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061569 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061642 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061665 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxc6c\" (UniqueName: \"kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061895 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061923 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.061971 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.062049 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.062214 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.062854 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.063075 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs" (OuterVolumeSpecName: "logs") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.063151 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.063863 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs" (OuterVolumeSpecName: "logs") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.069402 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c" (OuterVolumeSpecName: "kube-api-access-mxc6c") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "kube-api-access-mxc6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.071195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts" (OuterVolumeSpecName: "scripts") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.071742 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts" (OuterVolumeSpecName: "scripts") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.089180 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24" (OuterVolumeSpecName: "glance") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "pvc-6413f57d-8568-4541-9777-75a4b04caf24". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.118985 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.151291 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.159058 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data" (OuterVolumeSpecName: "config-data") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.161575 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data" (OuterVolumeSpecName: "config-data") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.164814 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run\") pod \"826492f3-d5ea-49fd-940a-806eb1c1004d\" (UID: \"826492f3-d5ea-49fd-940a-806eb1c1004d\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.164960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4t26\" (UniqueName: \"kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.165160 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "826492f3-d5ea-49fd-940a-806eb1c1004d" (UID: "826492f3-d5ea-49fd-940a-806eb1c1004d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.165202 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"f065d940-b4ed-430f-8cbb-53e966de69f8\" (UID: \"f065d940-b4ed-430f-8cbb-53e966de69f8\") " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.165967 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.165986 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.165997 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxc6c\" (UniqueName: \"kubernetes.io/projected/826492f3-d5ea-49fd-940a-806eb1c1004d-kube-api-access-mxc6c\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166007 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f065d940-b4ed-430f-8cbb-53e966de69f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166149 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166158 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166167 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826492f3-d5ea-49fd-940a-806eb1c1004d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166187 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") on node \"crc\" " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166197 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166206 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f065d940-b4ed-430f-8cbb-53e966de69f8-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.166214 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826492f3-d5ea-49fd-940a-806eb1c1004d-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.168362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26" (OuterVolumeSpecName: "kube-api-access-w4t26") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "kube-api-access-w4t26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.186259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12" (OuterVolumeSpecName: "glance") pod "f065d940-b4ed-430f-8cbb-53e966de69f8" (UID: "f065d940-b4ed-430f-8cbb-53e966de69f8"). InnerVolumeSpecName "pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.201124 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.201557 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6413f57d-8568-4541-9777-75a4b04caf24" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24") on node "crc" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.269355 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.269392 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4t26\" (UniqueName: \"kubernetes.io/projected/f065d940-b4ed-430f-8cbb-53e966de69f8-kube-api-access-w4t26\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.269422 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") on node \"crc\" " Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.302916 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.303129 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12") on node "crc" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.370639 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.676890 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f065d940-b4ed-430f-8cbb-53e966de69f8","Type":"ContainerDied","Data":"2d50f7dc9eba0a31d32a6c7ef969a0fc24977f1bf13a005ab549b08bb637db3f"} Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.678174 4809 scope.go:117] "RemoveContainer" containerID="add3e7537c53543ef50cb101648b46ec2d6a72abc2167b22f50699db40d9700a" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.676959 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.681491 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"826492f3-d5ea-49fd-940a-806eb1c1004d","Type":"ContainerDied","Data":"3c3999c589595e7f306ddd3db69d1a32dffe405ccfba16ff7d9eb14b83b005c6"} Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.681626 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.722172 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.750416 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.760084 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.773640 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.783541 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: E0226 14:40:22.784219 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784245 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: E0226 14:40:22.784274 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784284 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: E0226 14:40:22.784330 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784339 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: E0226 14:40:22.784357 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784364 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784609 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784634 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784659 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" containerName="glance-log" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.784678 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" containerName="glance-httpd" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.788437 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.795268 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.795505 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.795915 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4x9gd" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.796266 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.803344 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.805593 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.809136 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.809340 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.817530 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.828230 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.896995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897117 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897185 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897241 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897262 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:22 crc kubenswrapper[4809]: I0226 14:40:22.897312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tsw\" (UniqueName: \"kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:22.999635 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000135 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000175 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000248 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000316 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000352 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000916 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.000999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001056 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001082 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001212 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001589 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46tsw\" (UniqueName: \"kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001644 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.001743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m644x\" (UniqueName: \"kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.004404 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.004437 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5430fcd2916c7b014ac0286e22544a8396394be2a5cb5110057444f013914bf0/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.009502 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.014161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.017548 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.024904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46tsw\" (UniqueName: \"kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.030141 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.055512 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") " pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103602 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103756 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m644x\" (UniqueName: \"kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103815 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103869 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.103955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.104210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.104398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.106718 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.106750 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/290bcfa2d95fcff9bcdffee07cdf41f807340e8945582a9224b291984718a620/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.108416 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.108977 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.110333 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.110934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.112633 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.127465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m644x\" (UniqueName: \"kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.179866 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.435166 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.474192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.611242 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:40:23 crc kubenswrapper[4809]: I0226 14:40:23.611971 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" containerID="cri-o://43e9f84d9a5f8a7da1aadc798f85d0198a07606d11436546c47208d39f37b263" gracePeriod=10 Feb 26 14:40:24 crc kubenswrapper[4809]: I0226 14:40:24.268347 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826492f3-d5ea-49fd-940a-806eb1c1004d" path="/var/lib/kubelet/pods/826492f3-d5ea-49fd-940a-806eb1c1004d/volumes" Feb 26 14:40:24 crc kubenswrapper[4809]: I0226 14:40:24.269492 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f065d940-b4ed-430f-8cbb-53e966de69f8" path="/var/lib/kubelet/pods/f065d940-b4ed-430f-8cbb-53e966de69f8/volumes" Feb 26 14:40:24 crc kubenswrapper[4809]: I0226 14:40:24.710504 4809 generic.go:334] "Generic (PLEG): container finished" podID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerID="43e9f84d9a5f8a7da1aadc798f85d0198a07606d11436546c47208d39f37b263" exitCode=0 Feb 26 14:40:24 crc kubenswrapper[4809]: I0226 14:40:24.710558 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerDied","Data":"43e9f84d9a5f8a7da1aadc798f85d0198a07606d11436546c47208d39f37b263"} Feb 26 14:40:25 crc kubenswrapper[4809]: I0226 14:40:25.202050 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused" Feb 26 14:40:30 crc kubenswrapper[4809]: I0226 14:40:30.201880 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused" Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 
14:40:35.201886 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused" Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 14:40:35.202409 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 14:40:35.849059 4809 generic.go:334] "Generic (PLEG): container finished" podID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" containerID="f833bb0d99a2999ea15062758a6c24644e0775b1b30bb0681a40b8d788567bc0" exitCode=0 Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 14:40:35.849167 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49c6" event={"ID":"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9","Type":"ContainerDied","Data":"f833bb0d99a2999ea15062758a6c24644e0775b1b30bb0681a40b8d788567bc0"} Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 14:40:35.855951 4809 generic.go:334] "Generic (PLEG): container finished" podID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerID="cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d" exitCode=0 Feb 26 14:40:35 crc kubenswrapper[4809]: I0226 14:40:35.856002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d"} Feb 26 14:40:41 crc kubenswrapper[4809]: E0226 14:40:41.386178 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 14:40:41 crc kubenswrapper[4809]: E0226 14:40:41.386826 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r92sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b89wr_openstack(ddf13b0e-9265-48c1-830b-8f0e59578fcf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:40:41 crc kubenswrapper[4809]: E0226 14:40:41.388001 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b89wr" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.796534 4809 scope.go:117] "RemoveContainer" containerID="7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9" Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.937219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49c6" event={"ID":"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9","Type":"ContainerDied","Data":"942ef78944f133756c680aa4585f41585ca35eb7ef5eabdecc15131ba42acc9e"} Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.937720 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942ef78944f133756c680aa4585f41585ca35eb7ef5eabdecc15131ba42acc9e" Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.940366 4809 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" event={"ID":"103a17c3-ed84-4f31-9ebf-4066c84eb424","Type":"ContainerDied","Data":"1b123f60866a41787f56c923cfb4d3c259e83006ab83c6bac1274f5ea628fde0"} Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.940423 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b123f60866a41787f56c923cfb4d3c259e83006ab83c6bac1274f5ea628fde0" Feb 26 14:40:42 crc kubenswrapper[4809]: I0226 14:40:42.941918 4809 scope.go:117] "RemoveContainer" containerID="1f76858d44bdc4cef403b225a50eac58dfe50327d5dca64496fc41e0fab0d2e7" Feb 26 14:40:42 crc kubenswrapper[4809]: E0226 14:40:42.945080 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9\": container with ID starting with 7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9 not found: ID does not exist" containerID="7af3c3e068fcbc526b3fe7ebe29e1e522b0868cb137eff66abf6116019b58cf9" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.027871 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.040840 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g49c6" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.126991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data\") pod \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.127144 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.127176 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle\") pod \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128201 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128271 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128317 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfz9q\" (UniqueName: \"kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: 
\"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128345 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cvd5\" (UniqueName: \"kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5\") pod \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\" (UID: \"fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128461 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.128900 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb\") pod \"103a17c3-ed84-4f31-9ebf-4066c84eb424\" (UID: \"103a17c3-ed84-4f31-9ebf-4066c84eb424\") " Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.133190 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" (UID: "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.134000 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5" (OuterVolumeSpecName: "kube-api-access-5cvd5") pod "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" (UID: "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9"). InnerVolumeSpecName "kube-api-access-5cvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.167390 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q" (OuterVolumeSpecName: "kube-api-access-hfz9q") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "kube-api-access-hfz9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.195419 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.217314 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" (UID: "fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.228855 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config" (OuterVolumeSpecName: "config") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232231 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232631 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232761 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232782 4809 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232795 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232808 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232929 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfz9q\" (UniqueName: \"kubernetes.io/projected/103a17c3-ed84-4f31-9ebf-4066c84eb424-kube-api-access-hfz9q\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.232945 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cvd5\" (UniqueName: \"kubernetes.io/projected/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9-kube-api-access-5cvd5\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.284708 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.326395 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "103a17c3-ed84-4f31-9ebf-4066c84eb424" (UID: "103a17c3-ed84-4f31-9ebf-4066c84eb424"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.336532 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.336998 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/103a17c3-ed84-4f31-9ebf-4066c84eb424-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.348813 4809 scope.go:117] "RemoveContainer" containerID="f1db9d76b13851dbce7f632b7ecfc2eb02506f8c9be60a55ebb8eccfdc735fc8" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.499571 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:40:43 crc kubenswrapper[4809]: W0226 14:40:43.502859 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b0f56e_4e24_4d34_9576_ede63401881a.slice/crio-819a881da657772048c68f2982b2e13df1eb963e21c298e64c4d73905e4779ae WatchSource:0}: Error finding container 819a881da657772048c68f2982b2e13df1eb963e21c298e64c4d73905e4779ae: Status 404 returned error can't find the container with id 819a881da657772048c68f2982b2e13df1eb963e21c298e64c4d73905e4779ae Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.959532 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerStarted","Data":"819a881da657772048c68f2982b2e13df1eb963e21c298e64c4d73905e4779ae"} Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.968541 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-pph48" event={"ID":"84499f28-1908-4654-b0bc-a6961f49bb57","Type":"ContainerStarted","Data":"d38c36ea01dbc600d606b20240fc7ee0ccf280a2cbde879bc96d854b156ff1d2"} Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.970494 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" Feb 26 14:40:43 crc kubenswrapper[4809]: I0226 14:40:43.970510 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g49c6" Feb 26 14:40:43 crc kubenswrapper[4809]: E0226 14:40:43.970776 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 26 14:40:43 crc kubenswrapper[4809]: E0226 14:40:43.970991 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mfzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(76b6769a-0dce-4f13-8b83-720ae328c81b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 14:40:43 crc kubenswrapper[4809]: E0226 14:40:43.972200 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for 
\"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.046432 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.058580 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-csfht"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.210922 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:40:44 crc kubenswrapper[4809]: W0226 14:40:44.224141 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod886e92c7_5f48_464f_87d9_4bac65b13ea6.slice/crio-afd283439fa30193e2c2307b84ee0fe770a6c360f68c66309757f5772322b6e2 WatchSource:0}: Error finding container afd283439fa30193e2c2307b84ee0fe770a6c360f68c66309757f5772322b6e2: Status 404 returned error can't find the container with id afd283439fa30193e2c2307b84ee0fe770a6c360f68c66309757f5772322b6e2 Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.282637 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" path="/var/lib/kubelet/pods/103a17c3-ed84-4f31-9ebf-4066c84eb424/volumes" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.385093 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fcbbfdc9-5v7dg"] Feb 26 14:40:44 crc kubenswrapper[4809]: E0226 14:40:44.385731 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" containerName="barbican-db-sync" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.385760 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" containerName="barbican-db-sync" Feb 26 14:40:44 crc kubenswrapper[4809]: E0226 14:40:44.385787 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="init" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.385795 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="init" Feb 26 14:40:44 crc kubenswrapper[4809]: E0226 14:40:44.385816 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.385825 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.386138 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" containerName="barbican-db-sync" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.386175 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.387586 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.391757 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.392189 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.392305 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xkmwc" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.423411 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.425653 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.436349 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.442782 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fcbbfdc9-5v7dg"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.469539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jvl\" (UniqueName: \"kubernetes.io/projected/e87bc3c2-7478-45b4-bd69-5384f71376bd-kube-api-access-22jvl\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.469600 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data-custom\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.469641 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4xc\" (UniqueName: \"kubernetes.io/projected/b61dd9b3-075a-46bd-842c-184e5f02d804-kube-api-access-cs4xc\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.473771 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b61dd9b3-075a-46bd-842c-184e5f02d804-logs\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.473835 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.473954 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-combined-ca-bundle\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.474188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e87bc3c2-7478-45b4-bd69-5384f71376bd-logs\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.474238 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data-custom\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.474283 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.474442 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-combined-ca-bundle\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.492047 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576558 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e87bc3c2-7478-45b4-bd69-5384f71376bd-logs\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data-custom\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576666 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576752 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-combined-ca-bundle\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576783 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jvl\" (UniqueName: \"kubernetes.io/projected/e87bc3c2-7478-45b4-bd69-5384f71376bd-kube-api-access-22jvl\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data-custom\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576851 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs4xc\" (UniqueName: \"kubernetes.io/projected/b61dd9b3-075a-46bd-842c-184e5f02d804-kube-api-access-cs4xc\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576960 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b61dd9b3-075a-46bd-842c-184e5f02d804-logs\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.576988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.577079 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-combined-ca-bundle\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.579638 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b61dd9b3-075a-46bd-842c-184e5f02d804-logs\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.580150 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e87bc3c2-7478-45b4-bd69-5384f71376bd-logs\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.598886 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data-custom\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.600140 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-config-data\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.600873 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.601299 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-config-data-custom\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.602096 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61dd9b3-075a-46bd-842c-184e5f02d804-combined-ca-bundle\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.603647 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jvl\" (UniqueName: \"kubernetes.io/projected/e87bc3c2-7478-45b4-bd69-5384f71376bd-kube-api-access-22jvl\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.623605 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs4xc\" (UniqueName: \"kubernetes.io/projected/b61dd9b3-075a-46bd-842c-184e5f02d804-kube-api-access-cs4xc\") pod \"barbican-keystone-listener-5ddb7b7cf6-jq45v\" (UID: \"b61dd9b3-075a-46bd-842c-184e5f02d804\") " pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.627696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87bc3c2-7478-45b4-bd69-5384f71376bd-combined-ca-bundle\") pod \"barbican-worker-5fcbbfdc9-5v7dg\" (UID: \"e87bc3c2-7478-45b4-bd69-5384f71376bd\") " pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.637078 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.639148 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680399 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680461 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680550 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680572 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xnh\" (UniqueName: \"kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680586 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.680614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.718173 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.739289 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.791883 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794510 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xnh\" (UniqueName: \"kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794536 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.794794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.795930 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.800476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.802608 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 
14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.802965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.803208 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.825057 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xnh\" (UniqueName: \"kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh\") pod \"dnsmasq-dns-59d5ff467f-jnql5\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.830037 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.831930 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.836465 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.877067 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.897577 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.897738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.897792 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.897842 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnbm2\" (UniqueName: \"kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.897987 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.985788 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerStarted","Data":"8549972de902dfac038d362fa4c9f8ae04a7dacfc0868f75a91ef1c9e089a614"} Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.987334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerStarted","Data":"afd283439fa30193e2c2307b84ee0fe770a6c360f68c66309757f5772322b6e2"} Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.987547 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="ceilometer-notification-agent" containerID="cri-o://ded58b525fad1eebc13454b529958aee7717ff5bad17b1e87758ea615c638b5c" gracePeriod=30 Feb 26 14:40:44 crc kubenswrapper[4809]: I0226 14:40:44.987702 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="sg-core" containerID="cri-o://c93288e727859bd3af156a0e8d95f76eab39db2f557c98bd8c8fc7da5af1dd90" gracePeriod=30 Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.000250 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.000327 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.000386 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnbm2\" (UniqueName: \"kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.000547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.000595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") 
" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.001051 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.004535 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.005291 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.005839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.029934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnbm2\" (UniqueName: \"kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2\") pod \"barbican-api-665d7899fd-v7m65\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.039248 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.042237 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-pph48" podStartSLOduration=4.687326336 podStartE2EDuration="1m17.0422233s" podCreationTimestamp="2026-02-26 14:39:28 +0000 UTC" firstStartedPulling="2026-02-26 14:39:30.462553325 +0000 UTC m=+1548.935873848" lastFinishedPulling="2026-02-26 14:40:42.817450289 +0000 UTC m=+1621.290770812" observedRunningTime="2026-02-26 14:40:45.038300729 +0000 UTC m=+1623.511621252" watchObservedRunningTime="2026-02-26 14:40:45.0422233 +0000 UTC m=+1623.515543843" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.202587 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-csfht" podUID="103a17c3-ed84-4f31-9ebf-4066c84eb424" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: i/o timeout" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.245267 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:45 crc kubenswrapper[4809]: I0226 14:40:45.834883 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:40:45 crc kubenswrapper[4809]: W0226 14:40:45.869335 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2aabbfa_cdfb_4b2e_929c_362abd1f61bd.slice/crio-ecfebfb19468c16c47d026fb2cbe1dd98938edd78e7623337c0573677aaa8879 WatchSource:0}: Error finding container ecfebfb19468c16c47d026fb2cbe1dd98938edd78e7623337c0573677aaa8879: Status 404 returned error can't find the container with id ecfebfb19468c16c47d026fb2cbe1dd98938edd78e7623337c0573677aaa8879 Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.024437 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fcbbfdc9-5v7dg"] Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.062476 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v"] Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.177816 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.185416 4809 generic.go:334] "Generic (PLEG): container finished" podID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerID="c93288e727859bd3af156a0e8d95f76eab39db2f557c98bd8c8fc7da5af1dd90" exitCode=2 Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.185489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerDied","Data":"c93288e727859bd3af156a0e8d95f76eab39db2f557c98bd8c8fc7da5af1dd90"} Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.197605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerStarted","Data":"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b"} Feb 26 14:40:46 crc kubenswrapper[4809]: I0226 14:40:46.200389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerStarted","Data":"ecfebfb19468c16c47d026fb2cbe1dd98938edd78e7623337c0573677aaa8879"} Feb 26 14:40:46 crc kubenswrapper[4809]: W0226 14:40:46.200946 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e242fdf_1367_4075_a023_a70b7cdde477.slice/crio-392f8826395138476c4fe39565865e8a284afc3653ecb8acc915b84593c166a8 WatchSource:0}: Error finding container 392f8826395138476c4fe39565865e8a284afc3653ecb8acc915b84593c166a8: Status 404 returned error can't find the container with id 392f8826395138476c4fe39565865e8a284afc3653ecb8acc915b84593c166a8 Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.212050 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerStarted","Data":"30fb366d73131480563047ff4964711bdaab2db3c6a45c739bd5652cf3ce9e7d"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.214109 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerStarted","Data":"b93f8f4a9d43b5da6538855925f643937ae169dddf139b68bf41ca41edc8ea54"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.215446 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" event={"ID":"b61dd9b3-075a-46bd-842c-184e5f02d804","Type":"ContainerStarted","Data":"4b125bc30f5b7cbac176d7727b735a4989e687e56214e95702a53991c4bc0901"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.217030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" event={"ID":"e87bc3c2-7478-45b4-bd69-5384f71376bd","Type":"ContainerStarted","Data":"c46a2bbe1e7ca6dae80645589ce981684bcef21e23fbd699bc2d6042f64f13b9"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.218983 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerStarted","Data":"8a888d4b56d8c54b88b03a5317f245a806845b64a731dc17cd811b19f376d062"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.219034 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerStarted","Data":"392f8826395138476c4fe39565865e8a284afc3653ecb8acc915b84593c166a8"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.220997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerStarted","Data":"79a9eebe02422f3d3a7746a343b67bd18a589ac6384fdd2f0ca7b94fd5ce302b"} Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.246401 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.246383518 podStartE2EDuration="25.246383518s" podCreationTimestamp="2026-02-26 14:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:47.237529136 +0000 UTC m=+1625.710849659" watchObservedRunningTime="2026-02-26 14:40:47.246383518 +0000 UTC m=+1625.719704041" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.769200 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-694b9cc8b4-9gcrr"] Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.775463 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.778837 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.779097 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.829420 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-694b9cc8b4-9gcrr"] Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.881801 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-internal-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.881894 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-combined-ca-bundle\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.882115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.882193 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-public-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.882265 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5qk\" (UniqueName: \"kubernetes.io/projected/861702ed-9e3e-4321-bd9e-3059edb13cc3-kube-api-access-7b5qk\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.882306 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data-custom\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.882342 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861702ed-9e3e-4321-bd9e-3059edb13cc3-logs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985117 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-combined-ca-bundle\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985305 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-public-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b5qk\" (UniqueName: \"kubernetes.io/projected/861702ed-9e3e-4321-bd9e-3059edb13cc3-kube-api-access-7b5qk\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985609 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data-custom\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861702ed-9e3e-4321-bd9e-3059edb13cc3-logs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.985752 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-internal-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.988699 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/861702ed-9e3e-4321-bd9e-3059edb13cc3-logs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.991060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data-custom\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.991465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-public-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.996604 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-combined-ca-bundle\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.998623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-config-data\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:47 crc kubenswrapper[4809]: I0226 14:40:47.999325 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/861702ed-9e3e-4321-bd9e-3059edb13cc3-internal-tls-certs\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:48 crc kubenswrapper[4809]: I0226 14:40:48.008710 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b5qk\" (UniqueName: \"kubernetes.io/projected/861702ed-9e3e-4321-bd9e-3059edb13cc3-kube-api-access-7b5qk\") pod \"barbican-api-694b9cc8b4-9gcrr\" (UID: \"861702ed-9e3e-4321-bd9e-3059edb13cc3\") " pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:48 crc kubenswrapper[4809]: I0226 14:40:48.117180 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:48 crc kubenswrapper[4809]: I0226 14:40:48.286317 4809 generic.go:334] "Generic (PLEG): container finished" podID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerID="30fb366d73131480563047ff4964711bdaab2db3c6a45c739bd5652cf3ce9e7d" exitCode=0 Feb 26 14:40:48 crc kubenswrapper[4809]: I0226 14:40:48.304113 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerDied","Data":"30fb366d73131480563047ff4964711bdaab2db3c6a45c739bd5652cf3ce9e7d"} Feb 26 14:40:48 crc kubenswrapper[4809]: I0226 14:40:48.705260 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-694b9cc8b4-9gcrr"] Feb 26 14:40:49 crc kubenswrapper[4809]: I0226 14:40:49.321193 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-694b9cc8b4-9gcrr" event={"ID":"861702ed-9e3e-4321-bd9e-3059edb13cc3","Type":"ContainerStarted","Data":"139d30bb022231e6e03e5e3cfe982f9bcfc78edff4e0cb43cc0e1414c592c497"} Feb 26 14:40:50 crc kubenswrapper[4809]: I0226 14:40:50.741362 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:40:50 crc kubenswrapper[4809]: I0226 14:40:50.742045 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.349633 4809 generic.go:334] "Generic (PLEG): container finished" podID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerID="ded58b525fad1eebc13454b529958aee7717ff5bad17b1e87758ea615c638b5c" exitCode=0 Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.349905 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerDied","Data":"ded58b525fad1eebc13454b529958aee7717ff5bad17b1e87758ea615c638b5c"} Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.355812 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerStarted","Data":"c2322e311c9fd623aacef35e3de6c90e000a13dabdc75a73e5bebb3af4be7af6"} Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.357320 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.357652 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.359160 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: connect: connection refused" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.363857 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerStarted","Data":"22451bf8e50243eb6c84a6b6617932505ef476c54436fecef2b95cf294ef3fe1"} Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.364192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.380959 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerStarted","Data":"ffe71b0048809032c3a86b2d95a3454513fa7981ff8429d870bad86f6812a6d2"} Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.384612 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-665d7899fd-v7m65" podStartSLOduration=7.384594411 podStartE2EDuration="7.384594411s" podCreationTimestamp="2026-02-26 14:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:51.378941101 +0000 UTC m=+1629.852261624" watchObservedRunningTime="2026-02-26 14:40:51.384594411 +0000 UTC m=+1629.857914934" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.407801 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" podStartSLOduration=7.40778249 podStartE2EDuration="7.40778249s" podCreationTimestamp="2026-02-26 14:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:51.40640151 +0000 UTC m=+1629.879722033" watchObservedRunningTime="2026-02-26 14:40:51.40778249 +0000 UTC m=+1629.881103013" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.411366 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-694b9cc8b4-9gcrr" event={"ID":"861702ed-9e3e-4321-bd9e-3059edb13cc3","Type":"ContainerStarted","Data":"b7a222f546718080c3aab848f2936fa8d253081d6c860cfb3b0afaa2a4d8ef26"} Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.435580 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.435563268 podStartE2EDuration="29.435563268s" podCreationTimestamp="2026-02-26 14:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:51.434140758 +0000 UTC m=+1629.907461281" watchObservedRunningTime="2026-02-26 14:40:51.435563268 +0000 UTC m=+1629.908883791" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.549603 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.566477 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.590673 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvqmb"] Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.593349 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.614360 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvqmb"] Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.623572 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.665584 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c65dd4586-db9jl" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.708539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-catalog-content\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.708709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfsg\" (UniqueName: \"kubernetes.io/projected/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-kube-api-access-7gfsg\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.708738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-utilities\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.790743 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.810423 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-catalog-content\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.810506 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfsg\" (UniqueName: \"kubernetes.io/projected/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-kube-api-access-7gfsg\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.810526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-utilities\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.810956 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-utilities\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " 
pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.811362 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-catalog-content\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.851395 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:40:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:40:51 crc kubenswrapper[4809]: > Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.861570 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfsg\" (UniqueName: \"kubernetes.io/projected/5863bb93-7ab4-4326-b1fa-e4f1d5d920e2-kube-api-access-7gfsg\") pod \"certified-operators-rvqmb\" (UID: \"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2\") " pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:51 crc kubenswrapper[4809]: I0226 14:40:51.946715 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.113250 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.113920 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.113934 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.113944 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.184455 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.203632 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.222622 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.261943 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262248 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262488 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262659 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mfzj\" (UniqueName: \"kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262797 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262949 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.263130 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle\") pod \"76b6769a-0dce-4f13-8b83-720ae328c81b\" (UID: \"76b6769a-0dce-4f13-8b83-720ae328c81b\") " Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.262956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.263434 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.264040 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.264131 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76b6769a-0dce-4f13-8b83-720ae328c81b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.271554 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts" (OuterVolumeSpecName: "scripts") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.272080 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj" (OuterVolumeSpecName: "kube-api-access-7mfzj") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "kube-api-access-7mfzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.356634 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.356953 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.357390 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data" (OuterVolumeSpecName: "config-data") pod "76b6769a-0dce-4f13-8b83-720ae328c81b" (UID: "76b6769a-0dce-4f13-8b83-720ae328c81b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.366896 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mfzj\" (UniqueName: \"kubernetes.io/projected/76b6769a-0dce-4f13-8b83-720ae328c81b-kube-api-access-7mfzj\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.366947 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.366957 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.366966 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.366975 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76b6769a-0dce-4f13-8b83-720ae328c81b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.435941 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.437711 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.437745 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.437820 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.492679 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.495433 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.500075 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76b6769a-0dce-4f13-8b83-720ae328c81b","Type":"ContainerDied","Data":"e83a20346388e28ccd118dcaaf82d6d94eb288781b3a2bdf034f5f49090ad153"} Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.500143 4809 scope.go:117] "RemoveContainer" containerID="c93288e727859bd3af156a0e8d95f76eab39db2f557c98bd8c8fc7da5af1dd90" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.500380 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.501046 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-595b9697cb-h9llc" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-log" containerID="cri-o://f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4" gracePeriod=30 Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.501283 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-595b9697cb-h9llc" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-api" containerID="cri-o://0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48" gracePeriod=30 Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.636484 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.650687 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.662235 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:40:53 crc kubenswrapper[4809]: E0226 14:40:53.662808 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="sg-core" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.662827 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="sg-core" Feb 26 14:40:53 crc kubenswrapper[4809]: E0226 14:40:53.662878 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="ceilometer-notification-agent" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.662884 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="ceilometer-notification-agent" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.663208 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="ceilometer-notification-agent" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.663229 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" containerName="sg-core" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.665628 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.667657 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.667901 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.674949 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.778721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.778799 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.778990 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.779186 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.779269 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.779311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.779331 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g44z\" (UniqueName: \"kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881424 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 
14:40:53.881495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881537 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881577 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881602 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881626 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.881640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g44z\" (UniqueName: \"kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.882531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.882531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.886611 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.888340 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.888392 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.889155 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.901901 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g44z\" (UniqueName: \"kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z\") pod \"ceilometer-0\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " pod="openstack/ceilometer-0" Feb 26 14:40:53 crc kubenswrapper[4809]: I0226 14:40:53.985966 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.058029 4809 scope.go:117] "RemoveContainer" containerID="ded58b525fad1eebc13454b529958aee7717ff5bad17b1e87758ea615c638b5c" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.069007 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b67cbb9bb-wjj2d" Feb 26 14:40:54 crc kubenswrapper[4809]: E0226 14:40:54.262412 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b89wr" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.295177 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b6769a-0dce-4f13-8b83-720ae328c81b" path="/var/lib/kubelet/pods/76b6769a-0dce-4f13-8b83-720ae328c81b/volumes" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.627195 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerID="f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4" exitCode=143 Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.627605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerDied","Data":"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4"} Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.642728 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" event={"ID":"b61dd9b3-075a-46bd-842c-184e5f02d804","Type":"ContainerStarted","Data":"46e97f1c47d90eb8a3b97e13f76fc5146c86ff649f89c1fea273d5daee34af6b"} Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.644882 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-694b9cc8b4-9gcrr" event={"ID":"861702ed-9e3e-4321-bd9e-3059edb13cc3","Type":"ContainerStarted","Data":"4bb4733233491584e8b512d31f60c90eb9b2e652c998ae6e5065ddcfa48f5288"} Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.646628 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.646661 
4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.668556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" event={"ID":"e87bc3c2-7478-45b4-bd69-5384f71376bd","Type":"ContainerStarted","Data":"e8e1c50daab8974a8c14a1a3364a48ea392ce8e7e1cde1fb8a01b8899913ff68"} Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.670714 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvqmb"] Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.686720 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-694b9cc8b4-9gcrr" podStartSLOduration=7.686544389 podStartE2EDuration="7.686544389s" podCreationTimestamp="2026-02-26 14:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:40:54.669647919 +0000 UTC m=+1633.142968462" watchObservedRunningTime="2026-02-26 14:40:54.686544389 +0000 UTC m=+1633.159864952" Feb 26 14:40:54 crc kubenswrapper[4809]: I0226 14:40:54.794360 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:40:54 crc kubenswrapper[4809]: W0226 14:40:54.834183 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866fafde_caa3_46bd_bcbb_1361d47e7789.slice/crio-427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d WatchSource:0}: Error finding container 427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d: Status 404 returned error can't find the container with id 427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.679203 4809 generic.go:334] "Generic (PLEG): container finished" podID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerID="fcf7f8bdb45f594884459da9a4c758c04e6be0b3d9a19c92c6a776d0f7137ca2" exitCode=0 Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.679294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvqmb" event={"ID":"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2","Type":"ContainerDied","Data":"fcf7f8bdb45f594884459da9a4c758c04e6be0b3d9a19c92c6a776d0f7137ca2"} Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.680451 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvqmb" event={"ID":"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2","Type":"ContainerStarted","Data":"9e96c32203eae4185967a3a93f7ca1e1e1983c3a956382918aa21196913019ce"} Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.683827 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" event={"ID":"e87bc3c2-7478-45b4-bd69-5384f71376bd","Type":"ContainerStarted","Data":"da0ec2e1c868393261d20758b7dd6330e7d7c6605a534daecc4d1c29d407e769"} Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.686707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerStarted","Data":"427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d"} Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.688490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" event={"ID":"b61dd9b3-075a-46bd-842c-184e5f02d804","Type":"ContainerStarted","Data":"968eb682b9fd38eafdddfa981239c58a29531cbb762e4f095f4b2e208ee1f527"} Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.767568 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5ddb7b7cf6-jq45v" podStartSLOduration=3.81773587 podStartE2EDuration="11.767549293s" podCreationTimestamp="2026-02-26 14:40:44 +0000 UTC" firstStartedPulling="2026-02-26 14:40:46.114410905 +0000 UTC m=+1624.587731428" lastFinishedPulling="2026-02-26 14:40:54.064224328 +0000 UTC m=+1632.537544851" observedRunningTime="2026-02-26 14:40:55.722335409 +0000 UTC m=+1634.195655932" watchObservedRunningTime="2026-02-26 14:40:55.767549293 +0000 UTC m=+1634.240869816" Feb 26 14:40:55 crc kubenswrapper[4809]: I0226 14:40:55.801251 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fcbbfdc9-5v7dg" podStartSLOduration=3.875553141 podStartE2EDuration="11.801228189s" podCreationTimestamp="2026-02-26 14:40:44 +0000 UTC" firstStartedPulling="2026-02-26 14:40:46.140896367 +0000 UTC m=+1624.614216880" lastFinishedPulling="2026-02-26 14:40:54.066571405 +0000 UTC m=+1632.539891928" observedRunningTime="2026-02-26 14:40:55.748915534 +0000 UTC m=+1634.222236057" watchObservedRunningTime="2026-02-26 14:40:55.801228189 +0000 UTC m=+1634.274548722" Feb 26 14:40:56 crc kubenswrapper[4809]: I0226 14:40:56.702678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerStarted","Data":"f81bcff20e2b62765648f64ff5526a26729017c403c307eb7b2eef5f41d360d4"} Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.414173 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.458192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.503911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504582 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504638 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504698 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504769 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504854 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.504953 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs\") pod \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\" (UID: \"d8ad7932-ca7f-4db5-baec-e9f6be8f211b\") " Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.506327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs" (OuterVolumeSpecName: "logs") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.518927 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4" (OuterVolumeSpecName: "kube-api-access-2mgf4") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "kube-api-access-2mgf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.529271 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-kube-api-access-2mgf4\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.530305 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.544205 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts" (OuterVolumeSpecName: "scripts") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.662832 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.663027 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.755139 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data" (OuterVolumeSpecName: "config-data") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.773200 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.773238 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.788585 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerID="0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48" exitCode=0 Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.788681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerDied","Data":"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48"} Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.788709 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-595b9697cb-h9llc" event={"ID":"d8ad7932-ca7f-4db5-baec-e9f6be8f211b","Type":"ContainerDied","Data":"321a6c005b3fb4566c8bf7fadc5904207eda78372bc0aa4b74e06c4441d0677f"} Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.788724 4809 scope.go:117] "RemoveContainer" containerID="0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.788872 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-595b9697cb-h9llc" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.815034 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerStarted","Data":"87a4ae544ed7684b8eedfe09829a2cd5d032fecb13092c2fe6c0ac13ffbc1176"} Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.871211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.875810 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.879211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8ad7932-ca7f-4db5-baec-e9f6be8f211b" (UID: "d8ad7932-ca7f-4db5-baec-e9f6be8f211b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:40:57 crc kubenswrapper[4809]: I0226 14:40:57.980919 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ad7932-ca7f-4db5-baec-e9f6be8f211b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.024669 4809 scope.go:117] "RemoveContainer" containerID="f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.061364 4809 scope.go:117] "RemoveContainer" containerID="0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48" Feb 26 14:40:58 crc kubenswrapper[4809]: E0226 14:40:58.065059 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48\": container with ID starting with 0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48 not found: ID does not exist" containerID="0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.065108 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48"} err="failed to get container status \"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48\": rpc error: code = NotFound desc = could not find container \"0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48\": container with ID starting with 0752248e69578da0c6e90f48fc4d7280630bd7d28234b3b72a4ace7945a47e48 not found: ID does not exist" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.065138 4809 scope.go:117] "RemoveContainer" containerID="f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4" Feb 26 14:40:58 crc kubenswrapper[4809]: E0226 14:40:58.065799 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4\": container with ID starting with f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4 not found: ID does not exist" containerID="f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.065862 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4"} err="failed to get container status \"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4\": rpc error: code = NotFound desc = could not find container \"f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4\": container with ID starting with f969c68ad1a003206fd663a8e29ebd796cd3378c903a8b3fe31108e08a7795d4 not found: ID does not exist" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.137606 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.159514 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-595b9697cb-h9llc"] Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.279325 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" path="/var/lib/kubelet/pods/d8ad7932-ca7f-4db5-baec-e9f6be8f211b/volumes" 
Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.719653 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 14:40:58 crc kubenswrapper[4809]: E0226 14:40:58.720442 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-log" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.720461 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-log" Feb 26 14:40:58 crc kubenswrapper[4809]: E0226 14:40:58.720494 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-api" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.720501 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-api" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.720729 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-api" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.720768 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ad7932-ca7f-4db5-baec-e9f6be8f211b" containerName="placement-log" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.721550 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.724213 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.724403 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fzdg2" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.724542 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.739455 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.798860 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config-secret\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.798946 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.798998 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.799436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kl8\" (UniqueName: 
\"kubernetes.io/projected/a79dbedd-3475-4279-9c37-9add895fd0e1-kube-api-access-t6kl8\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.861428 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.861528 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.861858 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.864444 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.869298 4809 generic.go:334] "Generic (PLEG): container finished" podID="06597a2e-41b4-4d56-bed1-0cb73516bee0" containerID="a951219fdc2d9e5434d52ccc402f1c9691290b16f1d5fab63fe961e081b6e8d7" exitCode=0 Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.869376 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7cnn" event={"ID":"06597a2e-41b4-4d56-bed1-0cb73516bee0","Type":"ContainerDied","Data":"a951219fdc2d9e5434d52ccc402f1c9691290b16f1d5fab63fe961e081b6e8d7"} Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.877948 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerStarted","Data":"a0b1a78fa0070b44aaf2fc3035ad2404387a97825086f49ce02092bb1ccb9262"} Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.901639 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config-secret\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.901723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.901761 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.901985 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kl8\" (UniqueName: \"kubernetes.io/projected/a79dbedd-3475-4279-9c37-9add895fd0e1-kube-api-access-t6kl8\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.903604 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config\") pod \"openstackclient\" (UID: 
\"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.910960 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.948988 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kl8\" (UniqueName: \"kubernetes.io/projected/a79dbedd-3475-4279-9c37-9add895fd0e1-kube-api-access-t6kl8\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:58 crc kubenswrapper[4809]: I0226 14:40:58.964919 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a79dbedd-3475-4279-9c37-9add895fd0e1-openstack-config-secret\") pod \"openstackclient\" (UID: \"a79dbedd-3475-4279-9c37-9add895fd0e1\") " pod="openstack/openstackclient" Feb 26 14:40:59 crc kubenswrapper[4809]: I0226 14:40:59.050451 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 26 14:40:59 crc kubenswrapper[4809]: I0226 14:40:59.639967 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 14:40:59 crc kubenswrapper[4809]: I0226 14:40:59.674531 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:40:59 crc kubenswrapper[4809]: I0226 14:40:59.892197 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a79dbedd-3475-4279-9c37-9add895fd0e1","Type":"ContainerStarted","Data":"e79fc795e012f6dd293116bf67348a6b4a8ed69cd2802a60f5a1e17b38b13902"} Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.041154 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.132990 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.133423 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="dnsmasq-dns" containerID="cri-o://e5a8823a60e155be5c07f15d753fb5e333ae7bbe38a8557b31b08728c775240c" gracePeriod=10 Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.941213 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.967138 4809 generic.go:334] "Generic (PLEG): container finished" podID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerID="e5a8823a60e155be5c07f15d753fb5e333ae7bbe38a8557b31b08728c775240c" exitCode=0 Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.967239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerDied","Data":"e5a8823a60e155be5c07f15d753fb5e333ae7bbe38a8557b31b08728c775240c"} Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.983870 4809 generic.go:334] "Generic (PLEG): container finished" podID="84499f28-1908-4654-b0bc-a6961f49bb57" containerID="d38c36ea01dbc600d606b20240fc7ee0ccf280a2cbde879bc96d854b156ff1d2" exitCode=0 Feb 26 14:41:00 crc kubenswrapper[4809]: I0226 14:41:00.983967 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-pph48" event={"ID":"84499f28-1908-4654-b0bc-a6961f49bb57","Type":"ContainerDied","Data":"d38c36ea01dbc600d606b20240fc7ee0ccf280a2cbde879bc96d854b156ff1d2"} Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:00.999704 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") pod \"06597a2e-41b4-4d56-bed1-0cb73516bee0\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.005412 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle\") pod \"06597a2e-41b4-4d56-bed1-0cb73516bee0\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.005524 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd\") pod \"06597a2e-41b4-4d56-bed1-0cb73516bee0\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.034239 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd" (OuterVolumeSpecName: "kube-api-access-hnwvd") pod "06597a2e-41b4-4d56-bed1-0cb73516bee0" (UID: "06597a2e-41b4-4d56-bed1-0cb73516bee0"). InnerVolumeSpecName "kube-api-access-hnwvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.035665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7cnn" event={"ID":"06597a2e-41b4-4d56-bed1-0cb73516bee0","Type":"ContainerDied","Data":"e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c"} Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.035705 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e929aca96c4f7fb608732bc767bcff20e4c59ceaae2cf09f0866e0ce0e296b9c" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.035757 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b7cnn" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.055630 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06597a2e-41b4-4d56-bed1-0cb73516bee0" (UID: "06597a2e-41b4-4d56-bed1-0cb73516bee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.107219 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config" (OuterVolumeSpecName: "config") pod "06597a2e-41b4-4d56-bed1-0cb73516bee0" (UID: "06597a2e-41b4-4d56-bed1-0cb73516bee0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.107857 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") pod \"06597a2e-41b4-4d56-bed1-0cb73516bee0\" (UID: \"06597a2e-41b4-4d56-bed1-0cb73516bee0\") " Feb 26 14:41:01 crc kubenswrapper[4809]: W0226 14:41:01.111280 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/06597a2e-41b4-4d56-bed1-0cb73516bee0/volumes/kubernetes.io~secret/config Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.111325 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config" (OuterVolumeSpecName: "config") pod "06597a2e-41b4-4d56-bed1-0cb73516bee0" (UID: "06597a2e-41b4-4d56-bed1-0cb73516bee0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.116313 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.116348 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06597a2e-41b4-4d56-bed1-0cb73516bee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.116361 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnwvd\" (UniqueName: \"kubernetes.io/projected/06597a2e-41b4-4d56-bed1-0cb73516bee0-kube-api-access-hnwvd\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.821721 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:01 crc kubenswrapper[4809]: > Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.828153 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955276 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spwp2\" (UniqueName: \"kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955379 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955490 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955585 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955637 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.955799 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config\") pod \"be00c4f5-1553-47ad-874a-09ede8eb494e\" (UID: \"be00c4f5-1553-47ad-874a-09ede8eb494e\") " Feb 26 14:41:01 crc kubenswrapper[4809]: I0226 14:41:01.981221 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2" (OuterVolumeSpecName: "kube-api-access-spwp2") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "kube-api-access-spwp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.058795 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.060110 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.060152 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spwp2\" (UniqueName: \"kubernetes.io/projected/be00c4f5-1553-47ad-874a-09ede8eb494e-kube-api-access-spwp2\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.074126 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.105578 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config" (OuterVolumeSpecName: "config") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.107873 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.108846 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-ld5zt" event={"ID":"be00c4f5-1553-47ad-874a-09ede8eb494e","Type":"ContainerDied","Data":"5e2f054c3eb4d04c9dac12551ea0a5e22392a1aca243ac3941c254a424424bf1"} Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.108892 4809 scope.go:117] "RemoveContainer" containerID="e5a8823a60e155be5c07f15d753fb5e333ae7bbe38a8557b31b08728c775240c" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.152789 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.166071 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.166095 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.166104 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.168828 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be00c4f5-1553-47ad-874a-09ede8eb494e" (UID: "be00c4f5-1553-47ad-874a-09ede8eb494e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.235753 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:02 crc kubenswrapper[4809]: E0226 14:41:02.236442 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="dnsmasq-dns" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.236460 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="dnsmasq-dns" Feb 26 14:41:02 crc kubenswrapper[4809]: E0226 14:41:02.236480 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06597a2e-41b4-4d56-bed1-0cb73516bee0" containerName="neutron-db-sync" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.236488 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="06597a2e-41b4-4d56-bed1-0cb73516bee0" containerName="neutron-db-sync" Feb 26 14:41:02 crc kubenswrapper[4809]: E0226 14:41:02.236539 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="init" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.236548 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="init" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.236833 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" containerName="dnsmasq-dns" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.236867 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="06597a2e-41b4-4d56-bed1-0cb73516bee0" containerName="neutron-db-sync" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.238656 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.270886 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be00c4f5-1553-47ad-874a-09ede8eb494e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.316293 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.388539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.388831 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.389073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.389197 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.389304 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bnf\" (UniqueName: \"kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.389522 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.390050 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.392951 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.400199 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.400552 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rmtfm" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.401003 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.401369 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.417930 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.435449 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493103 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493181 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493344 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlf2\" (UniqueName: \"kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493378 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9bnf\" (UniqueName: \"kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf\") pod 
\"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493608 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493666 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493689 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.493801 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.495846 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.496275 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.496429 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.496841 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.496898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.512760 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.538951 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9bnf\" (UniqueName: \"kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf\") pod \"dnsmasq-dns-75c8ddd69c-m72ml\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.543240 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.543982 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.548108 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-ld5zt"] Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.591363 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.610728 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.610784 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.610962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.611055 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: 
\"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.611154 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlf2\" (UniqueName: \"kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.636056 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.637096 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.638242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.639779 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.681483 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.703274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlf2\" (UniqueName: \"kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2\") pod \"neutron-6f84bb7b56-576q9\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.725797 4809 scope.go:117] "RemoveContainer" containerID="8480b241e94427a84f2387e3a1498b9c2bd4e481e2660bcc4f13d34a81953c00" Feb 26 14:41:02 crc kubenswrapper[4809]: I0226 14:41:02.745289 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.464978 4809 scope.go:117] "RemoveContainer" containerID="9b40dbb10d9794c95ad44582665290af05af0154d7f4bb27e6681617d397d82e" Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.711364 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-pph48" Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.914412 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data\") pod \"84499f28-1908-4654-b0bc-a6961f49bb57\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.914554 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle\") pod \"84499f28-1908-4654-b0bc-a6961f49bb57\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.914610 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjmb\" (UniqueName: \"kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb\") pod \"84499f28-1908-4654-b0bc-a6961f49bb57\" (UID: \"84499f28-1908-4654-b0bc-a6961f49bb57\") " Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.963353 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb" (OuterVolumeSpecName: "kube-api-access-4vjmb") pod "84499f28-1908-4654-b0bc-a6961f49bb57" (UID: "84499f28-1908-4654-b0bc-a6961f49bb57"). InnerVolumeSpecName "kube-api-access-4vjmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:03 crc kubenswrapper[4809]: I0226 14:41:03.997856 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-694b9cc8b4-9gcrr" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.014383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84499f28-1908-4654-b0bc-a6961f49bb57" (UID: "84499f28-1908-4654-b0bc-a6961f49bb57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.023674 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.023696 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjmb\" (UniqueName: \"kubernetes.io/projected/84499f28-1908-4654-b0bc-a6961f49bb57-kube-api-access-4vjmb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.217852 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.218096 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" containerID="cri-o://8a888d4b56d8c54b88b03a5317f245a806845b64a731dc17cd811b19f376d062" gracePeriod=30 Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.218572 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" containerID="cri-o://c2322e311c9fd623aacef35e3de6c90e000a13dabdc75a73e5bebb3af4be7af6" gracePeriod=30 Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.227619 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data" (OuterVolumeSpecName: "config-data") pod "84499f28-1908-4654-b0bc-a6961f49bb57" (UID: "84499f28-1908-4654-b0bc-a6961f49bb57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.250750 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84499f28-1908-4654-b0bc-a6961f49bb57-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.263075 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": EOF" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.280838 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": EOF" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.304037 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be00c4f5-1553-47ad-874a-09ede8eb494e" path="/var/lib/kubelet/pods/be00c4f5-1553-47ad-874a-09ede8eb494e/volumes" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.379775 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-pph48" event={"ID":"84499f28-1908-4654-b0bc-a6961f49bb57","Type":"ContainerDied","Data":"2b9135cd164bd4b97125d1157d621757691fd14490057a15d5892d39d0505a6a"} Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.379819 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9135cd164bd4b97125d1157d621757691fd14490057a15d5892d39d0505a6a" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.379907 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-pph48" Feb 26 14:41:04 crc kubenswrapper[4809]: I0226 14:41:04.663988 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.195165 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6455866c87-pbhmh"] Feb 26 14:41:05 crc kubenswrapper[4809]: E0226 14:41:05.196118 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" containerName="heat-db-sync" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.196134 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" containerName="heat-db-sync" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.206064 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" containerName="heat-db-sync" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.208296 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.213712 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.213905 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.231796 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6455866c87-pbhmh"] Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.307488 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.323336 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-694b9cc8b4-9gcrr" podUID="861702ed-9e3e-4321-bd9e-3059edb13cc3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.325830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-httpd-config\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.325973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-config\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.326086 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfrw\" (UniqueName: \"kubernetes.io/projected/20736840-781c-4149-9398-481eb42d293b-kube-api-access-9bfrw\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.326160 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-ovndb-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.335459 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.358673 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-public-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.377357 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-internal-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.377724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-combined-ca-bundle\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.419207 4809 generic.go:334] "Generic (PLEG): container finished" podID="6e242fdf-1367-4075-a023-a70b7cdde477" containerID="8a888d4b56d8c54b88b03a5317f245a806845b64a731dc17cd811b19f376d062" exitCode=143 Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.419526 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerDied","Data":"8a888d4b56d8c54b88b03a5317f245a806845b64a731dc17cd811b19f376d062"} Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.430302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" event={"ID":"c1340cfd-c74a-46d2-ac15-b488ffc4a579","Type":"ContainerStarted","Data":"65b1473ed607e77935e087fb88a9ace7a27fe566d4649fc43955fdde0cc3d66c"} Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.467864 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerStarted","Data":"bc8342a3e828a5c979876227e1d13c982413de9d05a3c986824bf6ab5db373e3"} Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.469897 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.477887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerStarted","Data":"ab5713db2fdd85289cf3e3a089a36f9ca09c488726d73cda24b41424864cb20e"} Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.481719 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-httpd-config\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.481880 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-config\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.481984 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bfrw\" (UniqueName: \"kubernetes.io/projected/20736840-781c-4149-9398-481eb42d293b-kube-api-access-9bfrw\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.482006 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-ovndb-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.482200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-public-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.482318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-internal-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.482364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-combined-ca-bundle\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.489998 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-combined-ca-bundle\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.525555 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-public-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.527543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-config\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.531957 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-internal-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.537080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-ovndb-tls-certs\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.537497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20736840-781c-4149-9398-481eb42d293b-httpd-config\") pod \"neutron-6455866c87-pbhmh\" (UID: 
\"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.545926 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.698673534 podStartE2EDuration="12.545908678s" podCreationTimestamp="2026-02-26 14:40:53 +0000 UTC" firstStartedPulling="2026-02-26 14:40:54.836461966 +0000 UTC m=+1633.309782489" lastFinishedPulling="2026-02-26 14:41:03.68369711 +0000 UTC m=+1642.157017633" observedRunningTime="2026-02-26 14:41:05.520831606 +0000 UTC m=+1643.994152139" watchObservedRunningTime="2026-02-26 14:41:05.545908678 +0000 UTC m=+1644.019229201" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.546822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bfrw\" (UniqueName: \"kubernetes.io/projected/20736840-781c-4149-9398-481eb42d293b-kube-api-access-9bfrw\") pod \"neutron-6455866c87-pbhmh\" (UID: \"20736840-781c-4149-9398-481eb42d293b\") " pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:05 crc kubenswrapper[4809]: I0226 14:41:05.641104 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:06 crc kubenswrapper[4809]: I0226 14:41:06.374071 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6455866c87-pbhmh"] Feb 26 14:41:06 crc kubenswrapper[4809]: I0226 14:41:06.528546 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerID="f41224ede8364373b998c1f6c43517f71817bcb43e5191ba4251ebf8b2456236" exitCode=0 Feb 26 14:41:06 crc kubenswrapper[4809]: I0226 14:41:06.528656 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" event={"ID":"c1340cfd-c74a-46d2-ac15-b488ffc4a579","Type":"ContainerDied","Data":"f41224ede8364373b998c1f6c43517f71817bcb43e5191ba4251ebf8b2456236"} Feb 26 14:41:06 crc kubenswrapper[4809]: I0226 14:41:06.534717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6455866c87-pbhmh" event={"ID":"20736840-781c-4149-9398-481eb42d293b","Type":"ContainerStarted","Data":"79fa6874d218bf19d446a59eec918c165399e489e290f8aa9794984cecb859d8"} Feb 26 14:41:06 crc kubenswrapper[4809]: I0226 14:41:06.566633 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerStarted","Data":"38591e4eb5542f4971e7324d47e1bc5f751dd92e2b795b0a1bb67640bcc550dc"} Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.593060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" event={"ID":"c1340cfd-c74a-46d2-ac15-b488ffc4a579","Type":"ContainerStarted","Data":"0f8172d35fcec190dc83464937b7b77cc831dcf92f79518e9527b7ae36629190"} Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.594369 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.598735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6455866c87-pbhmh" event={"ID":"20736840-781c-4149-9398-481eb42d293b","Type":"ContainerStarted","Data":"6678bc1046f9deace72fb6746f9855a752b00827f680a29b790d349e560e7545"} Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.598785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6455866c87-pbhmh" event={"ID":"20736840-781c-4149-9398-481eb42d293b","Type":"ContainerStarted","Data":"d7b621dd3fdcf734cb99def17e2dc5119d8bc52d6ba6db8639d879619d87e369"} Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.598816 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.606272 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerStarted","Data":"48ce47a24f289d52f6572ea903dd7e94c903e9ce4b72aa3e109146cb0a2c2898"} Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.607479 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.619318 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podStartSLOduration=5.619302541 podStartE2EDuration="5.619302541s" podCreationTimestamp="2026-02-26 14:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:07.617590852 +0000 UTC m=+1646.090911375" watchObservedRunningTime="2026-02-26 14:41:07.619302541 +0000 UTC m=+1646.092623064" Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.667570 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f84bb7b56-576q9" podStartSLOduration=5.6675435400000005 podStartE2EDuration="5.66754354s" podCreationTimestamp="2026-02-26 14:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:07.638614059 +0000 UTC m=+1646.111934582" watchObservedRunningTime="2026-02-26 14:41:07.66754354 +0000 UTC m=+1646.140864063" Feb 26 14:41:07 crc kubenswrapper[4809]: I0226 14:41:07.683769 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6455866c87-pbhmh" podStartSLOduration=2.6837424 podStartE2EDuration="2.6837424s" podCreationTimestamp="2026-02-26 14:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:07.667031426 +0000 UTC m=+1646.140351959" watchObservedRunningTime="2026-02-26 14:41:07.6837424 +0000 UTC m=+1646.157062933" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.179142 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.179612 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-central-agent" containerID="cri-o://f81bcff20e2b62765648f64ff5526a26729017c403c307eb7b2eef5f41d360d4" gracePeriod=30 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.179756 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="proxy-httpd" containerID="cri-o://bc8342a3e828a5c979876227e1d13c982413de9d05a3c986824bf6ab5db373e3" gracePeriod=30 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.179796 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="sg-core" containerID="cri-o://a0b1a78fa0070b44aaf2fc3035ad2404387a97825086f49ce02092bb1ccb9262" gracePeriod=30 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.179827 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-notification-agent" containerID="cri-o://87a4ae544ed7684b8eedfe09829a2cd5d032fecb13092c2fe6c0ac13ffbc1176" gracePeriod=30 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636201 4809 generic.go:334] "Generic (PLEG): container finished" podID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerID="bc8342a3e828a5c979876227e1d13c982413de9d05a3c986824bf6ab5db373e3" exitCode=0 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636238 4809 generic.go:334] "Generic (PLEG): container finished" podID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerID="a0b1a78fa0070b44aaf2fc3035ad2404387a97825086f49ce02092bb1ccb9262" exitCode=2 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636250 4809 generic.go:334] "Generic (PLEG): container finished" podID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerID="87a4ae544ed7684b8eedfe09829a2cd5d032fecb13092c2fe6c0ac13ffbc1176" exitCode=0 Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636529 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerDied","Data":"bc8342a3e828a5c979876227e1d13c982413de9d05a3c986824bf6ab5db373e3"} Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636589 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerDied","Data":"a0b1a78fa0070b44aaf2fc3035ad2404387a97825086f49ce02092bb1ccb9262"} Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.636605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerDied","Data":"87a4ae544ed7684b8eedfe09829a2cd5d032fecb13092c2fe6c0ac13ffbc1176"} Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.885997 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5cf69889d9-nqp5q"] Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.888643 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.891244 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.891934 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.892090 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.910788 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cf69889d9-nqp5q"] Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.931968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-config-data\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932086 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-etc-swift\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932164 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-combined-ca-bundle\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932286 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qf8r\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-kube-api-access-7qf8r\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-internal-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932427 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-run-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932773 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-public-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " 
pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:08 crc kubenswrapper[4809]: I0226 14:41:08.932978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-log-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034008 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-config-data\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-etc-swift\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034101 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-combined-ca-bundle\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034187 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qf8r\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-kube-api-access-7qf8r\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-internal-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034343 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-run-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-public-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034450 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-log-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 
14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.034960 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-run-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.035305 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-log-httpd\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.040396 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-etc-swift\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.040865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-config-data\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.041590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-combined-ca-bundle\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.042497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-internal-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.050669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-public-tls-certs\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.059426 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qf8r\" (UniqueName: \"kubernetes.io/projected/dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3-kube-api-access-7qf8r\") pod \"swift-proxy-5cf69889d9-nqp5q\" (UID: \"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3\") " pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.224857 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.322313 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.656390 4809 generic.go:334] "Generic (PLEG): container finished" podID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerID="f81bcff20e2b62765648f64ff5526a26729017c403c307eb7b2eef5f41d360d4" exitCode=0 Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.656684 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerDied","Data":"f81bcff20e2b62765648f64ff5526a26729017c403c307eb7b2eef5f41d360d4"} Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.656745 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"866fafde-caa3-46bd-bcbb-1361d47e7789","Type":"ContainerDied","Data":"427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d"} Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.656760 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="427e6c83626e9968883dfdae5f7acb4cb47df2f7c2c6fa1dfb2517e8dc2df77d" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.761463 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.966895 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.967788 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g44z\" (UniqueName: \"kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.968076 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.968108 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.968132 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.968162 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.968175 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml\") pod \"866fafde-caa3-46bd-bcbb-1361d47e7789\" (UID: \"866fafde-caa3-46bd-bcbb-1361d47e7789\") " Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.969641 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.969732 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.976467 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z" (OuterVolumeSpecName: "kube-api-access-2g44z") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "kube-api-access-2g44z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.981305 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts" (OuterVolumeSpecName: "scripts") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:09 crc kubenswrapper[4809]: I0226 14:41:09.987225 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cf69889d9-nqp5q"] Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.015345 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.071657 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g44z\" (UniqueName: \"kubernetes.io/projected/866fafde-caa3-46bd-bcbb-1361d47e7789-kube-api-access-2g44z\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.071704 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.071717 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/866fafde-caa3-46bd-bcbb-1361d47e7789-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.071725 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.071734 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.093628 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data" (OuterVolumeSpecName: "config-data") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.102442 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "866fafde-caa3-46bd-bcbb-1361d47e7789" (UID: "866fafde-caa3-46bd-bcbb-1361d47e7789"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.179084 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.179117 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866fafde-caa3-46bd-bcbb-1361d47e7789-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.365269 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.365350 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.667214 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.696967 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.714327 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.727573 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:10 crc kubenswrapper[4809]: E0226 14:41:10.728122 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="proxy-httpd" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728140 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="proxy-httpd" Feb 26 14:41:10 crc kubenswrapper[4809]: E0226 14:41:10.728157 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-central-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728165 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-central-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: E0226 14:41:10.728183 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="sg-core" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728189 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="sg-core" Feb 26 14:41:10 crc kubenswrapper[4809]: E0226 14:41:10.728199 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-notification-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728205 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" 
containerName="ceilometer-notification-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728425 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="sg-core" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728444 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="proxy-httpd" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728459 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-notification-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.728471 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" containerName="ceilometer-central-agent" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.730578 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.733570 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.734742 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.741201 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.745228 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:55686->10.217.0.206:9311: read: connection reset by peer" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.745216 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:55698->10.217.0.206:9311: read: connection reset by peer" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.891943 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87twx\" (UniqueName: \"kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.892703 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.892866 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.893092 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.893189 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.893317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.893364 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.998792 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.998865 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.998904 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87twx\" (UniqueName: \"kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.998926 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.998966 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.999048 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 
14:41:10.999146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:10 crc kubenswrapper[4809]: I0226 14:41:10.999381 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.005919 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.006133 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.006295 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.010885 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.024898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.032801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87twx\" (UniqueName: \"kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx\") pod \"ceilometer-0\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.061929 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.188445 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.193793 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.201502 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.203132 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.205180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.210696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.203305 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hnpcv" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.209450 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.211435 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.211709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.314316 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.314423 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.314533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 
14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.315124 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.324167 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.329747 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.330486 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.338438 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.340132 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.347588 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.353686 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") pod \"heat-engine-56dc8f9c4c-xv6sb\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.402075 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.402850 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" containerID="cri-o://0f8172d35fcec190dc83464937b7b77cc831dcf92f79518e9527b7ae36629190" gracePeriod=10 Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.464120 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.526354 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.526747 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.527489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptsl\" (UniqueName: \"kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.527748 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.530202 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.532525 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.541272 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.564195 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.601273 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.603304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.605171 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.625205 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630393 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630501 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfjx\" (UniqueName: \"kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630585 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630771 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630792 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.630976 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.631031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptsl\" (UniqueName: \"kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.631128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.637136 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.645789 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.678812 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptsl\" (UniqueName: \"kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.679983 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom\") pod \"heat-cfnapi-fc995f557-svpqf\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6lw\" (UniqueName: \"kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748342 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748386 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748412 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748531 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748554 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748596 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nfjx\" (UniqueName: \"kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748669 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.748707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.749522 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.751794 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.752277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.753181 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.753476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.753553 4809 generic.go:334] "Generic (PLEG): container finished" podID="6e242fdf-1367-4075-a023-a70b7cdde477" containerID="c2322e311c9fd623aacef35e3de6c90e000a13dabdc75a73e5bebb3af4be7af6" exitCode=0 Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.753609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerDied","Data":"c2322e311c9fd623aacef35e3de6c90e000a13dabdc75a73e5bebb3af4be7af6"} Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.767517 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerID="0f8172d35fcec190dc83464937b7b77cc831dcf92f79518e9527b7ae36629190" exitCode=0 Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.767562 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" event={"ID":"c1340cfd-c74a-46d2-ac15-b488ffc4a579","Type":"ContainerDied","Data":"0f8172d35fcec190dc83464937b7b77cc831dcf92f79518e9527b7ae36629190"} Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.779316 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nfjx\" (UniqueName: \"kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx\") pod \"dnsmasq-dns-5847c5b965-5f9r8\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.800302 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.816537 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:11 crc kubenswrapper[4809]: > Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.850887 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.851283 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.851488 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6lw\" (UniqueName: \"kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.851542 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.856673 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.857259 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.863023 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.874901 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6lw\" (UniqueName: \"kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw\") pod \"heat-api-7d495f5bbb-zwxps\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 
14:41:11 crc kubenswrapper[4809]: I0226 14:41:11.926179 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:12 crc kubenswrapper[4809]: I0226 14:41:12.002146 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:12 crc kubenswrapper[4809]: I0226 14:41:12.271193 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866fafde-caa3-46bd-bcbb-1361d47e7789" path="/var/lib/kubelet/pods/866fafde-caa3-46bd-bcbb-1361d47e7789/volumes" Feb 26 14:41:12 crc kubenswrapper[4809]: I0226 14:41:12.685203 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: connect: connection refused" Feb 26 14:41:15 crc kubenswrapper[4809]: I0226 14:41:15.247256 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: connect: connection refused" Feb 26 14:41:15 crc kubenswrapper[4809]: I0226 14:41:15.247274 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: connect: connection refused" Feb 26 14:41:15 crc kubenswrapper[4809]: I0226 14:41:15.247855 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:41:15 crc kubenswrapper[4809]: I0226 14:41:15.248024 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.682501 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: connect: connection refused" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.757676 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.759711 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.795066 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.833250 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.835076 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.880608 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.906938 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.909765 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.919455 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.921723 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.921896 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.921938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922063 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxcc\" (UniqueName: \"kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922313 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxx9\" (UniqueName: \"kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922362 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " 
pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922576 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7xc\" (UniqueName: \"kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:17 crc kubenswrapper[4809]: I0226 14:41:17.922734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.024875 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.024926 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.024974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxcc\" (UniqueName: \"kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcxx9\" (UniqueName: \"kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9\") pod 
\"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025124 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025282 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025376 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7xc\" (UniqueName: \"kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025399 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.025442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.032366 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: 
\"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.032514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.033518 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.034298 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.039159 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.040918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.056329 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.058031 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.058142 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxcc\" (UniqueName: \"kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.058828 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7xc\" (UniqueName: \"kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc\") pod \"heat-cfnapi-766d487f57-m2cst\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 
14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.059058 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcxx9\" (UniqueName: \"kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9\") pod \"heat-engine-5cdd964fc5-s4bsx\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.059128 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data\") pod \"heat-api-d7bcc9fd4-8k662\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.110171 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.158207 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.237118 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.846288 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.856085 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.915259 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.916898 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.920400 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.921356 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.960086 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.962160 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.968537 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 26 14:41:18 crc kubenswrapper[4809]: I0226 14:41:18.968795 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.021970 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054529 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054630 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054699 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054740 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054781 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054823 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054842 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfldc\" (UniqueName: \"kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054884 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6469j\" (UniqueName: \"kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.054905 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.055095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.098074 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157100 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157159 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157216 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157233 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data\") pod 
\"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157324 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157375 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfldc\" (UniqueName: \"kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157440 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6469j\" (UniqueName: \"kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157459 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157488 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.157531 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.162573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: 
\"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.162950 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.163888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.164640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.164946 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.165364 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.166552 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.166573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.169480 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.176022 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6469j\" (UniqueName: \"kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: 
I0226 14:41:19.176589 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfldc\" (UniqueName: \"kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc\") pod \"heat-cfnapi-54ff6f8d67-p4qrr\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.194933 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle\") pod \"heat-api-5b8c5684b6-nfc98\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.269449 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:19 crc kubenswrapper[4809]: I0226 14:41:19.322532 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:20 crc kubenswrapper[4809]: I0226 14:41:20.246495 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: connect: connection refused" Feb 26 14:41:20 crc kubenswrapper[4809]: I0226 14:41:20.246529 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: connect: connection refused" Feb 26 14:41:21 crc kubenswrapper[4809]: I0226 14:41:21.802200 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:21 crc kubenswrapper[4809]: > Feb 26 14:41:22 crc kubenswrapper[4809]: I0226 14:41:22.683672 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: connect: connection refused" Feb 26 14:41:22 crc kubenswrapper[4809]: W0226 14:41:22.700812 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4d3fe9_350f_4fc3_8cd2_6ea95162a0d3.slice/crio-a89b2ff9e02ee91995b06d8af3342c2971a271f4b4fae6df0a69cb9bfc767bce WatchSource:0}: Error finding container a89b2ff9e02ee91995b06d8af3342c2971a271f4b4fae6df0a69cb9bfc767bce: Status 404 returned error can't find the container with id a89b2ff9e02ee91995b06d8af3342c2971a271f4b4fae6df0a69cb9bfc767bce Feb 26 14:41:22 crc kubenswrapper[4809]: I0226 14:41:22.945407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cf69889d9-nqp5q" event={"ID":"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3","Type":"ContainerStarted","Data":"a89b2ff9e02ee91995b06d8af3342c2971a271f4b4fae6df0a69cb9bfc767bce"} Feb 26 14:41:23 crc kubenswrapper[4809]: E0226 14:41:23.469069 4809 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 26 14:41:23 crc kubenswrapper[4809]: E0226 14:41:23.469244 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67chbfhf6h96h546h58dh85h75h58dhb4h554hbbh5bch5d5h5c4hb7h95hd4h75hdh5f5h56ch684h54dh556h68ch5c4h5cfh5dhd5h579h5d6q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6kl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(a79dbedd-3475-4279-9c37-9add895fd0e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:41:23 crc kubenswrapper[4809]: E0226 14:41:23.471241 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="a79dbedd-3475-4279-9c37-9add895fd0e1" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.655521 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.658378 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.686709 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.806963 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.807513 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r46gx\" (UniqueName: \"kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.807591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.910286 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r46gx\" (UniqueName: \"kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.910351 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.910498 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.911047 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.912044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.936126 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r46gx\" (UniqueName: \"kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx\") pod \"community-operators-6dkbf\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:24 crc kubenswrapper[4809]: I0226 14:41:24.992501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:29 crc kubenswrapper[4809]: E0226 14:41:29.157839 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="a79dbedd-3475-4279-9c37-9add895fd0e1" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.346276 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.358838 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420153 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle\") pod \"6e242fdf-1367-4075-a023-a70b7cdde477\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420257 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs\") pod \"6e242fdf-1367-4075-a023-a70b7cdde477\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420329 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data\") pod \"6e242fdf-1367-4075-a023-a70b7cdde477\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420370 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420442 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnbm2\" (UniqueName: \"kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2\") pod \"6e242fdf-1367-4075-a023-a70b7cdde477\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420470 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420563 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l9bnf\" (UniqueName: \"kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420606 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom\") pod \"6e242fdf-1367-4075-a023-a70b7cdde477\" (UID: \"6e242fdf-1367-4075-a023-a70b7cdde477\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420664 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420803 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.420870 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config\") pod \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\" (UID: \"c1340cfd-c74a-46d2-ac15-b488ffc4a579\") " Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.425584 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs" (OuterVolumeSpecName: "logs") pod "6e242fdf-1367-4075-a023-a70b7cdde477" (UID: "6e242fdf-1367-4075-a023-a70b7cdde477"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.435543 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2" (OuterVolumeSpecName: "kube-api-access-xnbm2") pod "6e242fdf-1367-4075-a023-a70b7cdde477" (UID: "6e242fdf-1367-4075-a023-a70b7cdde477"). InnerVolumeSpecName "kube-api-access-xnbm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.446287 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e242fdf-1367-4075-a023-a70b7cdde477" (UID: "6e242fdf-1367-4075-a023-a70b7cdde477"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.446820 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf" (OuterVolumeSpecName: "kube-api-access-l9bnf") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "kube-api-access-l9bnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.490077 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.523896 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnbm2\" (UniqueName: \"kubernetes.io/projected/6e242fdf-1367-4075-a023-a70b7cdde477-kube-api-access-xnbm2\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.523931 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9bnf\" (UniqueName: \"kubernetes.io/projected/c1340cfd-c74a-46d2-ac15-b488ffc4a579-kube-api-access-l9bnf\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.523941 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.523949 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e242fdf-1367-4075-a023-a70b7cdde477-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.545649 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.562114 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.589206 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e242fdf-1367-4075-a023-a70b7cdde477" (UID: "6e242fdf-1367-4075-a023-a70b7cdde477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.613944 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.626653 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.626695 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.626712 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.626728 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.653772 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.676720 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config" (OuterVolumeSpecName: "config") pod "c1340cfd-c74a-46d2-ac15-b488ffc4a579" (UID: "c1340cfd-c74a-46d2-ac15-b488ffc4a579"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.694759 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data" (OuterVolumeSpecName: "config-data") pod "6e242fdf-1367-4075-a023-a70b7cdde477" (UID: "6e242fdf-1367-4075-a023-a70b7cdde477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.729667 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.729711 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e242fdf-1367-4075-a023-a70b7cdde477-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:29 crc kubenswrapper[4809]: I0226 14:41:29.729726 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1340cfd-c74a-46d2-ac15-b488ffc4a579-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.026744 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" event={"ID":"c1340cfd-c74a-46d2-ac15-b488ffc4a579","Type":"ContainerDied","Data":"65b1473ed607e77935e087fb88a9ace7a27fe566d4649fc43955fdde0cc3d66c"} Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.027255 4809 scope.go:117] "RemoveContainer" containerID="0f8172d35fcec190dc83464937b7b77cc831dcf92f79518e9527b7ae36629190" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.027522 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.044026 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvqmb" event={"ID":"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2","Type":"ContainerStarted","Data":"9ca3c53f396b31dfb024619bd97b1b94332a30b51507917e758099d9b4810680"} Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.064701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-665d7899fd-v7m65" event={"ID":"6e242fdf-1367-4075-a023-a70b7cdde477","Type":"ContainerDied","Data":"392f8826395138476c4fe39565865e8a284afc3653ecb8acc915b84593c166a8"} Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.065276 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-665d7899fd-v7m65" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.118762 4809 scope.go:117] "RemoveContainer" containerID="f41224ede8364373b998c1f6c43517f71817bcb43e5191ba4251ebf8b2456236" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.252400 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.259171 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-665d7899fd-v7m65" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": dial tcp 10.217.0.206:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.293678 4809 scope.go:117] "RemoveContainer" containerID="c2322e311c9fd623aacef35e3de6c90e000a13dabdc75a73e5bebb3af4be7af6" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.360955 4809 scope.go:117] "RemoveContainer" containerID="8a888d4b56d8c54b88b03a5317f245a806845b64a731dc17cd811b19f376d062" Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.378069 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.397186 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-m72ml"] Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.401800 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.411842 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-665d7899fd-v7m65"] Feb 26 14:41:30 crc kubenswrapper[4809]: I0226 14:41:30.584456 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.096054 4809 generic.go:334] "Generic (PLEG): container finished" podID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerID="9ca3c53f396b31dfb024619bd97b1b94332a30b51507917e758099d9b4810680" exitCode=0 Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.096167 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvqmb" event={"ID":"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2","Type":"ContainerDied","Data":"9ca3c53f396b31dfb024619bd97b1b94332a30b51507917e758099d9b4810680"} Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.123693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cf69889d9-nqp5q" event={"ID":"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3","Type":"ContainerStarted","Data":"c1114a09c1e6a4519f2414470da94909425e45f6f5405b88d8e269cc6bb5fcac"} Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.124060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cf69889d9-nqp5q" event={"ID":"dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3","Type":"ContainerStarted","Data":"62dd5266b9c2a72a51e3a0d0c9b9ddeacfadf9954b76236e07f120b8520d4728"} Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.125981 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.126035 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.131690 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" event={"ID":"41f133cb-dc08-41e2-beeb-243ce04699a4","Type":"ContainerStarted","Data":"1a368b3eec2590360dcb7e1d2c6c08dab81937d52a293902bf6b4e314cc6dfd0"} Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.174412 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5cf69889d9-nqp5q" podStartSLOduration=23.174392052 podStartE2EDuration="23.174392052s" podCreationTimestamp="2026-02-26 14:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:31.157183094 +0000 UTC m=+1669.630503627" watchObservedRunningTime="2026-02-26 14:41:31.174392052 +0000 UTC m=+1669.647712575" Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.310724 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.336082 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.379429 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.397148 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.423273 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.443715 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.814988 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:31 crc kubenswrapper[4809]: > Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.840557 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.877529 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.887834 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:31 crc kubenswrapper[4809]: I0226 14:41:31.904029 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.141842 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" event={"ID":"d869783f-f6de-42fc-8e42-a628d4b11262","Type":"ContainerStarted","Data":"701de7f488cabd1c14b39cef7f344847cabcc67c0a7d3f5ec892d908d7c90644"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.144457 
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerStarted","Data":"d36d65466db4edb3be63d7eaf1c98aad2395282f8f1cd790d8042292acaf3aec"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.144483 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerStarted","Data":"de5cc63307f4c69e036452a711022072cc07d6790721368487fc75d32e377e99"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.146792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5cdd964fc5-s4bsx" event={"ID":"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1","Type":"ContainerStarted","Data":"5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.146856 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5cdd964fc5-s4bsx" event={"ID":"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1","Type":"ContainerStarted","Data":"61f51b68d27ebbd4a425b8a3eb438255b08ea35c21db09db488d47231ce08290"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.147136 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.148278 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d495f5bbb-zwxps" event={"ID":"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3","Type":"ContainerStarted","Data":"92702af601645fd2ed9e7223df09d53f82fcc59474134188725abc3f46ae1387"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.149902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" event={"ID":"41f133cb-dc08-41e2-beeb-243ce04699a4","Type":"ContainerStarted","Data":"55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.150044 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.154976 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerStarted","Data":"5c7fb3320f7eb4bacfdf689eaebab8d89aee99ed6ba268dd97a2225534862420"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.157414 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b8c5684b6-nfc98" event={"ID":"572ca251-6227-4c68-a2dc-b1a0161eb9d6","Type":"ContainerStarted","Data":"df1efd5e682cd1fa203efae2ed63c683085ee3b835a458ef7eea8189ab842624"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.159237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerStarted","Data":"5bb37c04d83d874447615857d903d650bfe995f75c3bd561f664095a0560f44a"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.161700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc995f557-svpqf" event={"ID":"5a182993-2e3c-464d-8d6c-e8d62b833f4b","Type":"ContainerStarted","Data":"b56e2509af52b89086ab8e460d7b5a79a6f6927964156048dc8af296fff5f450"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.163632 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-b89wr" event={"ID":"ddf13b0e-9265-48c1-830b-8f0e59578fcf","Type":"ContainerStarted","Data":"b23687ee1125fe608d3e2e63998130bf767040e78c1dcf963247917d4da77d97"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.176151 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerStarted","Data":"63d61a1c0d2ab78624c6814535a3a370dbd3386297b8ca8be66447929e557980"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.180376 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5cdd964fc5-s4bsx" podStartSLOduration=15.180345685 podStartE2EDuration="15.180345685s" podCreationTimestamp="2026-02-26 14:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:32.176512437 +0000 UTC m=+1670.649832970" watchObservedRunningTime="2026-02-26 14:41:32.180345685 +0000 UTC m=+1670.653666228" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.180815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerStarted","Data":"306d13770b8d7496e89e321610104787ad95d8f5c3c4f0f9cd35409808a977b7"} Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.205962 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b89wr" podStartSLOduration=4.535470813 podStartE2EDuration="2m3.205939682s" podCreationTimestamp="2026-02-26 14:39:29 +0000 UTC" firstStartedPulling="2026-02-26 14:39:30.856662529 +0000 UTC m=+1549.329983052" lastFinishedPulling="2026-02-26 14:41:29.527131398 +0000 UTC m=+1668.000451921" observedRunningTime="2026-02-26 14:41:32.201778694 +0000 UTC m=+1670.675099217" watchObservedRunningTime="2026-02-26 14:41:32.205939682 +0000 UTC m=+1670.679260215" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.235725 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" podStartSLOduration=21.235698777 podStartE2EDuration="21.235698777s" podCreationTimestamp="2026-02-26 14:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:32.219674332 +0000 UTC m=+1670.692994855" watchObservedRunningTime="2026-02-26 14:41:32.235698777 +0000 UTC m=+1670.709019300" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.270862 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" path="/var/lib/kubelet/pods/6e242fdf-1367-4075-a023-a70b7cdde477/volumes" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.271708 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" path="/var/lib/kubelet/pods/c1340cfd-c74a-46d2-ac15-b488ffc4a579/volumes" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.682955 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-m72ml" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.211:5353: i/o timeout" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.755306 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/neutron-6f84bb7b56-576q9" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.755306 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f84bb7b56-576q9" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:32 crc kubenswrapper[4809]: I0226 14:41:32.757567 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6f84bb7b56-576q9" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.198159 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerStarted","Data":"8158f171ab00d27cd4a45dbef9b58ae718388d18372b921752b680356ce019e1"} Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.199886 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvqmb" event={"ID":"5863bb93-7ab4-4326-b1fa-e4f1d5d920e2","Type":"ContainerStarted","Data":"403480c802b2de5ccfbd763f330bd8f59d1391982e1ad86f64918367fd5eab41"} Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.203940 4809 generic.go:334] "Generic (PLEG): container finished" podID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerID="3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b" exitCode=0 Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.203998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerDied","Data":"3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b"} Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.207863 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerID="d36d65466db4edb3be63d7eaf1c98aad2395282f8f1cd790d8042292acaf3aec" exitCode=0 Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.208734 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerDied","Data":"d36d65466db4edb3be63d7eaf1c98aad2395282f8f1cd790d8042292acaf3aec"} Feb 26 14:41:33 crc kubenswrapper[4809]: I0226 14:41:33.231885 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvqmb" podStartSLOduration=5.5750201409999995 podStartE2EDuration="42.231862143s" podCreationTimestamp="2026-02-26 14:40:51 +0000 UTC" firstStartedPulling="2026-02-26 14:40:55.681045027 +0000 UTC m=+1634.154365550" lastFinishedPulling="2026-02-26 14:41:32.337887029 +0000 UTC m=+1670.811207552" observedRunningTime="2026-02-26 14:41:33.222847077 +0000 UTC m=+1671.696167600" watchObservedRunningTime="2026-02-26 14:41:33.231862143 +0000 UTC m=+1671.705182666" Feb 26 14:41:34 crc kubenswrapper[4809]: I0226 14:41:34.227617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerStarted","Data":"130994327b5f0dedf6502820e7c91a22ce78c4ce4659dcffcdc409692d4b2195"} Feb 26 14:41:34 crc 
kubenswrapper[4809]: I0226 14:41:34.258908 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" podStartSLOduration=23.258882466 podStartE2EDuration="23.258882466s" podCreationTimestamp="2026-02-26 14:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:34.249328904 +0000 UTC m=+1672.722649447" watchObservedRunningTime="2026-02-26 14:41:34.258882466 +0000 UTC m=+1672.732202989" Feb 26 14:41:35 crc kubenswrapper[4809]: I0226 14:41:35.240256 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:35 crc kubenswrapper[4809]: I0226 14:41:35.652514 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6455866c87-pbhmh" podUID="20736840-781c-4149-9398-481eb42d293b" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:35 crc kubenswrapper[4809]: I0226 14:41:35.657240 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6455866c87-pbhmh" podUID="20736840-781c-4149-9398-481eb42d293b" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:35 crc kubenswrapper[4809]: I0226 14:41:35.657662 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6455866c87-pbhmh" podUID="20736840-781c-4149-9398-481eb42d293b" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.297150 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerStarted","Data":"b18454663391459697351affffb42f4a6d09adf7f4b544f9c238dce32e90001a"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.352146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" event={"ID":"d869783f-f6de-42fc-8e42-a628d4b11262","Type":"ContainerStarted","Data":"4146d83c51e3980648ffe05bce0df5553fe95e0d540c73b68ed51d17402dd07e"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.353814 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.461949 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerStarted","Data":"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.473586 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" podStartSLOduration=14.728615444999999 podStartE2EDuration="19.473571756s" podCreationTimestamp="2026-02-26 14:41:18 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.310302712 +0000 UTC m=+1669.783623235" lastFinishedPulling="2026-02-26 14:41:36.055259023 +0000 UTC m=+1674.528579546" observedRunningTime="2026-02-26 14:41:37.472733752 +0000 UTC m=+1675.946054275" watchObservedRunningTime="2026-02-26 14:41:37.473571756 +0000 UTC m=+1675.946892279" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.543442 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" 
event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerStarted","Data":"af8378132f608760549d2eac30ef792b7839d9cc8d3c42f952e11fe9181eb5cd"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.543845 4809 scope.go:117] "RemoveContainer" containerID="af8378132f608760549d2eac30ef792b7839d9cc8d3c42f952e11fe9181eb5cd" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.573770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b8c5684b6-nfc98" event={"ID":"572ca251-6227-4c68-a2dc-b1a0161eb9d6","Type":"ContainerStarted","Data":"c11dd4f1fafd681e716d3e90db1f66e79d9958e4daa1c1d5c257a66f7645782c"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.574079 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.609302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerStarted","Data":"114d581bd40bd92690022c7589a75238ffd59d6917ccc9050ffe032185b0d533"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.609803 4809 scope.go:117] "RemoveContainer" containerID="114d581bd40bd92690022c7589a75238ffd59d6917ccc9050ffe032185b0d533" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.635120 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc995f557-svpqf" event={"ID":"5a182993-2e3c-464d-8d6c-e8d62b833f4b","Type":"ContainerStarted","Data":"8c20faff6d179f0dee41d012d48c7f50bc79b1e871a59043ca556e36c9c0f46b"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.635320 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-fc995f557-svpqf" podUID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" containerName="heat-cfnapi" containerID="cri-o://8c20faff6d179f0dee41d012d48c7f50bc79b1e871a59043ca556e36c9c0f46b" gracePeriod=60 Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.635681 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.664665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d495f5bbb-zwxps" event={"ID":"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3","Type":"ContainerStarted","Data":"69c8aa9f03750dff2c6b6dffa66a438a2a326fed2afe31ab623b68d270ed976b"} Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.664895 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7d495f5bbb-zwxps" podUID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" containerName="heat-api" containerID="cri-o://69c8aa9f03750dff2c6b6dffa66a438a2a326fed2afe31ab623b68d270ed976b" gracePeriod=60 Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.664996 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.701929 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b8c5684b6-nfc98" podStartSLOduration=15.570974842 podStartE2EDuration="19.701904919s" podCreationTimestamp="2026-02-26 14:41:18 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.924455279 +0000 UTC m=+1670.397775802" lastFinishedPulling="2026-02-26 14:41:36.055385356 +0000 UTC m=+1674.528705879" observedRunningTime="2026-02-26 14:41:37.620436886 +0000 UTC m=+1676.093757419" 
watchObservedRunningTime="2026-02-26 14:41:37.701904919 +0000 UTC m=+1676.175225442" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.832666 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fc995f557-svpqf" podStartSLOduration=22.697805935 podStartE2EDuration="26.832648232s" podCreationTimestamp="2026-02-26 14:41:11 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.909244978 +0000 UTC m=+1670.382565501" lastFinishedPulling="2026-02-26 14:41:36.044087275 +0000 UTC m=+1674.517407798" observedRunningTime="2026-02-26 14:41:37.72164812 +0000 UTC m=+1676.194968643" watchObservedRunningTime="2026-02-26 14:41:37.832648232 +0000 UTC m=+1676.305968745" Feb 26 14:41:37 crc kubenswrapper[4809]: I0226 14:41:37.844473 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d495f5bbb-zwxps" podStartSLOduration=22.098048465 podStartE2EDuration="26.844448697s" podCreationTimestamp="2026-02-26 14:41:11 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.310661812 +0000 UTC m=+1669.783982335" lastFinishedPulling="2026-02-26 14:41:36.057062044 +0000 UTC m=+1674.530382567" observedRunningTime="2026-02-26 14:41:37.754776801 +0000 UTC m=+1676.228097324" watchObservedRunningTime="2026-02-26 14:41:37.844448697 +0000 UTC m=+1676.317769220" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.158815 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.158865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.237415 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.237455 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.679658 4809 generic.go:334] "Generic (PLEG): container finished" podID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerID="af8378132f608760549d2eac30ef792b7839d9cc8d3c42f952e11fe9181eb5cd" exitCode=1 Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.679865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerDied","Data":"af8378132f608760549d2eac30ef792b7839d9cc8d3c42f952e11fe9181eb5cd"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.680086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerStarted","Data":"4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.680116 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.682593 4809 generic.go:334] "Generic (PLEG): container finished" podID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerID="114d581bd40bd92690022c7589a75238ffd59d6917ccc9050ffe032185b0d533" exitCode=1 Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.682654 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" 
event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerDied","Data":"114d581bd40bd92690022c7589a75238ffd59d6917ccc9050ffe032185b0d533"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.682681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerStarted","Data":"aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.683257 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.685478 4809 generic.go:334] "Generic (PLEG): container finished" podID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" containerID="8c20faff6d179f0dee41d012d48c7f50bc79b1e871a59043ca556e36c9c0f46b" exitCode=0 Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.685544 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc995f557-svpqf" event={"ID":"5a182993-2e3c-464d-8d6c-e8d62b833f4b","Type":"ContainerDied","Data":"8c20faff6d179f0dee41d012d48c7f50bc79b1e871a59043ca556e36c9c0f46b"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.688039 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" containerID="69c8aa9f03750dff2c6b6dffa66a438a2a326fed2afe31ab623b68d270ed976b" exitCode=0 Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.688115 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d495f5bbb-zwxps" event={"ID":"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3","Type":"ContainerDied","Data":"69c8aa9f03750dff2c6b6dffa66a438a2a326fed2afe31ab623b68d270ed976b"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.691989 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerStarted","Data":"3feeb7e0824f63340e6903846f3642f321dc337220b34da7b20a7fd960903db4"} Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.709510 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-d7bcc9fd4-8k662" podStartSLOduration=17.520498763 podStartE2EDuration="21.709490379s" podCreationTimestamp="2026-02-26 14:41:17 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.862261883 +0000 UTC m=+1670.335582416" lastFinishedPulling="2026-02-26 14:41:36.051253509 +0000 UTC m=+1674.524574032" observedRunningTime="2026-02-26 14:41:38.707166253 +0000 UTC m=+1677.180486776" watchObservedRunningTime="2026-02-26 14:41:38.709490379 +0000 UTC m=+1677.182810902" Feb 26 14:41:38 crc kubenswrapper[4809]: I0226 14:41:38.746196 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-766d487f57-m2cst" podStartSLOduration=16.999449678 podStartE2EDuration="21.74617682s" podCreationTimestamp="2026-02-26 14:41:17 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.310436745 +0000 UTC m=+1669.783757268" lastFinishedPulling="2026-02-26 14:41:36.057163887 +0000 UTC m=+1674.530484410" observedRunningTime="2026-02-26 14:41:38.730435393 +0000 UTC m=+1677.203755926" watchObservedRunningTime="2026-02-26 14:41:38.74617682 +0000 UTC m=+1677.219497343" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.243308 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 
14:41:39.243686 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cf69889d9-nqp5q" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.320150 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.375763 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data\") pod \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.376094 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6lw\" (UniqueName: \"kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw\") pod \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.376174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom\") pod \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.376209 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle\") pod \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\" (UID: \"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.383701 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw" (OuterVolumeSpecName: "kube-api-access-5j6lw") pod "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" (UID: "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3"). InnerVolumeSpecName "kube-api-access-5j6lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.390125 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" (UID: "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.442508 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" (UID: "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.475615 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data" (OuterVolumeSpecName: "config-data") pod "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" (UID: "0a3a9762-ab24-4cbb-ac3a-7efaa74035a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.479600 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.479630 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.479639 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6lw\" (UniqueName: \"kubernetes.io/projected/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-kube-api-access-5j6lw\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.479677 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.589414 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.683208 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kptsl\" (UniqueName: \"kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl\") pod \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.683428 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle\") pod \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.683516 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data\") pod \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.683583 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom\") pod \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\" (UID: \"5a182993-2e3c-464d-8d6c-e8d62b833f4b\") " Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.689071 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a182993-2e3c-464d-8d6c-e8d62b833f4b" (UID: "5a182993-2e3c-464d-8d6c-e8d62b833f4b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.737450 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl" (OuterVolumeSpecName: "kube-api-access-kptsl") pod "5a182993-2e3c-464d-8d6c-e8d62b833f4b" (UID: "5a182993-2e3c-464d-8d6c-e8d62b833f4b"). InnerVolumeSpecName "kube-api-access-kptsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.745883 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a182993-2e3c-464d-8d6c-e8d62b833f4b" (UID: "5a182993-2e3c-464d-8d6c-e8d62b833f4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.759668 4809 generic.go:334] "Generic (PLEG): container finished" podID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerID="aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143" exitCode=1 Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.759729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerDied","Data":"aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143"} Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.759761 4809 scope.go:117] "RemoveContainer" containerID="114d581bd40bd92690022c7589a75238ffd59d6917ccc9050ffe032185b0d533" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.760559 4809 scope.go:117] "RemoveContainer" containerID="aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143" Feb 26 14:41:39 crc kubenswrapper[4809]: E0226 14:41:39.760868 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-766d487f57-m2cst_openstack(86d9f926-4bf5-4517-aff3-390eb57d6dbb)\"" pod="openstack/heat-cfnapi-766d487f57-m2cst" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.768811 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc995f557-svpqf" event={"ID":"5a182993-2e3c-464d-8d6c-e8d62b833f4b","Type":"ContainerDied","Data":"b56e2509af52b89086ab8e460d7b5a79a6f6927964156048dc8af296fff5f450"} Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.768900 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fc995f557-svpqf" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.772302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d495f5bbb-zwxps" event={"ID":"0a3a9762-ab24-4cbb-ac3a-7efaa74035a3","Type":"ContainerDied","Data":"92702af601645fd2ed9e7223df09d53f82fcc59474134188725abc3f46ae1387"} Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.772382 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d495f5bbb-zwxps" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.780041 4809 generic.go:334] "Generic (PLEG): container finished" podID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerID="5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0" exitCode=0 Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.780116 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerDied","Data":"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0"} Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.789503 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.789551 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.789565 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kptsl\" (UniqueName: \"kubernetes.io/projected/5a182993-2e3c-464d-8d6c-e8d62b833f4b-kube-api-access-kptsl\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.789899 4809 generic.go:334] "Generic (PLEG): container finished" podID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerID="4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5" exitCode=1 Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.790044 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerDied","Data":"4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5"} Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.790810 4809 scope.go:117] "RemoveContainer" containerID="4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5" Feb 26 14:41:39 crc kubenswrapper[4809]: E0226 14:41:39.791195 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-d7bcc9fd4-8k662_openstack(f2082c9f-a7b4-44c8-9737-e55e5cf2a841)\"" pod="openstack/heat-api-d7bcc9fd4-8k662" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.830983 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data" (OuterVolumeSpecName: "config-data") pod "5a182993-2e3c-464d-8d6c-e8d62b833f4b" (UID: "5a182993-2e3c-464d-8d6c-e8d62b833f4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.875114 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.887520 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7d495f5bbb-zwxps"] Feb 26 14:41:39 crc kubenswrapper[4809]: I0226 14:41:39.893331 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a182993-2e3c-464d-8d6c-e8d62b833f4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.102607 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.117779 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-fc995f557-svpqf"] Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.269369 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" path="/var/lib/kubelet/pods/0a3a9762-ab24-4cbb-ac3a-7efaa74035a3/volumes" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.270124 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" path="/var/lib/kubelet/pods/5a182993-2e3c-464d-8d6c-e8d62b833f4b/volumes" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.423505 4809 scope.go:117] "RemoveContainer" containerID="8c20faff6d179f0dee41d012d48c7f50bc79b1e871a59043ca556e36c9c0f46b" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.445965 4809 scope.go:117] "RemoveContainer" containerID="69c8aa9f03750dff2c6b6dffa66a438a2a326fed2afe31ab623b68d270ed976b" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.470911 4809 scope.go:117] "RemoveContainer" containerID="af8378132f608760549d2eac30ef792b7839d9cc8d3c42f952e11fe9181eb5cd" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.806840 4809 scope.go:117] "RemoveContainer" containerID="aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143" Feb 26 14:41:40 crc kubenswrapper[4809]: E0226 14:41:40.807184 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-766d487f57-m2cst_openstack(86d9f926-4bf5-4517-aff3-390eb57d6dbb)\"" pod="openstack/heat-cfnapi-766d487f57-m2cst" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" Feb 26 14:41:40 crc kubenswrapper[4809]: I0226 14:41:40.811518 4809 scope.go:117] "RemoveContainer" containerID="4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5" Feb 26 14:41:40 crc kubenswrapper[4809]: E0226 14:41:40.811894 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-d7bcc9fd4-8k662_openstack(f2082c9f-a7b4-44c8-9737-e55e5cf2a841)\"" pod="openstack/heat-api-d7bcc9fd4-8k662" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" Feb 26 14:41:41 crc kubenswrapper[4809]: I0226 14:41:41.643921 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:41 crc kubenswrapper[4809]: I0226 14:41:41.799995 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" 
podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:41 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:41 crc kubenswrapper[4809]: > Feb 26 14:41:41 crc kubenswrapper[4809]: I0226 14:41:41.931574 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:41:41 crc kubenswrapper[4809]: I0226 14:41:41.951518 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:41:41 crc kubenswrapper[4809]: I0226 14:41:41.951575 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:41:42 crc kubenswrapper[4809]: I0226 14:41:42.026584 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:41:42 crc kubenswrapper[4809]: I0226 14:41:42.026875 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="dnsmasq-dns" containerID="cri-o://22451bf8e50243eb6c84a6b6617932505ef476c54436fecef2b95cf294ef3fe1" gracePeriod=10 Feb 26 14:41:42 crc kubenswrapper[4809]: I0226 14:41:42.849861 4809 generic.go:334] "Generic (PLEG): container finished" podID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerID="22451bf8e50243eb6c84a6b6617932505ef476c54436fecef2b95cf294ef3fe1" exitCode=0 Feb 26 14:41:42 crc kubenswrapper[4809]: I0226 14:41:42.850429 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerDied","Data":"22451bf8e50243eb6c84a6b6617932505ef476c54436fecef2b95cf294ef3fe1"} Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.158542 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.159366 4809 scope.go:117] "RemoveContainer" containerID="aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143" Feb 26 14:41:43 crc kubenswrapper[4809]: E0226 14:41:43.159700 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-766d487f57-m2cst_openstack(86d9f926-4bf5-4517-aff3-390eb57d6dbb)\"" pod="openstack/heat-cfnapi-766d487f57-m2cst" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.221960 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.237438 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.238397 4809 scope.go:117] "RemoveContainer" containerID="4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5" Feb 26 14:41:43 crc kubenswrapper[4809]: E0226 14:41:43.238712 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-d7bcc9fd4-8k662_openstack(f2082c9f-a7b4-44c8-9737-e55e5cf2a841)\"" pod="openstack/heat-api-d7bcc9fd4-8k662" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.305538 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.305772 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.305930 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.305966 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9xnh\" (UniqueName: \"kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.306061 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.306149 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb\") pod \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\" (UID: \"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd\") " Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.318058 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh" (OuterVolumeSpecName: "kube-api-access-t9xnh") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "kube-api-access-t9xnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.405202 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.410725 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config" (OuterVolumeSpecName: "config") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.411642 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9xnh\" (UniqueName: \"kubernetes.io/projected/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-kube-api-access-t9xnh\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.411671 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.411683 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.420618 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.429510 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.444971 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" (UID: "b2aabbfa-cdfb-4b2e-929c-362abd1f61bd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.513741 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.513778 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.513791 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.593433 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:43 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:43 crc kubenswrapper[4809]: > Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.883319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" event={"ID":"b2aabbfa-cdfb-4b2e-929c-362abd1f61bd","Type":"ContainerDied","Data":"ecfebfb19468c16c47d026fb2cbe1dd98938edd78e7623337c0573677aaa8879"} Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.883913 4809 scope.go:117] "RemoveContainer" containerID="22451bf8e50243eb6c84a6b6617932505ef476c54436fecef2b95cf294ef3fe1" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.883821 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-jnql5" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.892346 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerStarted","Data":"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4"} Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.925153 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dkbf" podStartSLOduration=10.520564446 podStartE2EDuration="19.925130066s" podCreationTimestamp="2026-02-26 14:41:24 +0000 UTC" firstStartedPulling="2026-02-26 14:41:33.206714879 +0000 UTC m=+1671.680035402" lastFinishedPulling="2026-02-26 14:41:42.611280499 +0000 UTC m=+1681.084601022" observedRunningTime="2026-02-26 14:41:43.913084484 +0000 UTC m=+1682.386405017" watchObservedRunningTime="2026-02-26 14:41:43.925130066 +0000 UTC m=+1682.398450589" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.974734 4809 scope.go:117] "RemoveContainer" containerID="30fb366d73131480563047ff4964711bdaab2db3c6a45c739bd5652cf3ce9e7d" Feb 26 14:41:43 crc kubenswrapper[4809]: I0226 14:41:43.990060 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.011454 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-jnql5"] Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.279077 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" path="/var/lib/kubelet/pods/b2aabbfa-cdfb-4b2e-929c-362abd1f61bd/volumes" Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.908524 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a79dbedd-3475-4279-9c37-9add895fd0e1","Type":"ContainerStarted","Data":"454bafecc75cc02ee64a7993291f6d567a1d4a8b2dcf6faab58a6cb1fb38f73e"} Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.964655 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.774857974 podStartE2EDuration="46.964628702s" podCreationTimestamp="2026-02-26 14:40:58 +0000 UTC" firstStartedPulling="2026-02-26 14:40:59.655168621 +0000 UTC m=+1638.128489144" lastFinishedPulling="2026-02-26 14:41:43.844939349 +0000 UTC m=+1682.318259872" observedRunningTime="2026-02-26 14:41:44.926900241 +0000 UTC m=+1683.400220764" watchObservedRunningTime="2026-02-26 14:41:44.964628702 +0000 UTC m=+1683.437949225" Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.993480 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:44 crc kubenswrapper[4809]: I0226 14:41:44.995340 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.932323 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-central-agent" containerID="cri-o://8158f171ab00d27cd4a45dbef9b58ae718388d18372b921752b680356ce019e1" gracePeriod=30 Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.933136 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerStarted","Data":"18b846f463bf1e1e6f69070b3a68841650fd884e7bd375ecacf1b0ba2fe5d5ba"} Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.933198 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.933639 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="proxy-httpd" containerID="cri-o://18b846f463bf1e1e6f69070b3a68841650fd884e7bd375ecacf1b0ba2fe5d5ba" gracePeriod=30 Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.933709 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="sg-core" containerID="cri-o://3feeb7e0824f63340e6903846f3642f321dc337220b34da7b20a7fd960903db4" gracePeriod=30 Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.933769 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-notification-agent" containerID="cri-o://b18454663391459697351affffb42f4a6d09adf7f4b544f9c238dce32e90001a" gracePeriod=30 Feb 26 14:41:45 crc kubenswrapper[4809]: I0226 14:41:45.969978 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=22.409629137 podStartE2EDuration="35.969955178s" podCreationTimestamp="2026-02-26 14:41:10 +0000 UTC" firstStartedPulling="2026-02-26 14:41:31.342664561 +0000 UTC m=+1669.815985084" lastFinishedPulling="2026-02-26 14:41:44.902990602 +0000 UTC m=+1683.376311125" observedRunningTime="2026-02-26 14:41:45.961072646 +0000 UTC m=+1684.434393179" watchObservedRunningTime="2026-02-26 14:41:45.969955178 +0000 UTC m=+1684.443275701" Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.102698 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6dkbf" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:41:46 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:41:46 crc kubenswrapper[4809]: > Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.613727 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.705760 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.960500 4809 generic.go:334] "Generic (PLEG): container finished" podID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerID="3feeb7e0824f63340e6903846f3642f321dc337220b34da7b20a7fd960903db4" exitCode=2 Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.960865 4809 generic.go:334] "Generic (PLEG): container finished" podID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerID="b18454663391459697351affffb42f4a6d09adf7f4b544f9c238dce32e90001a" exitCode=0 Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.960877 4809 generic.go:334] "Generic (PLEG): container finished" podID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerID="8158f171ab00d27cd4a45dbef9b58ae718388d18372b921752b680356ce019e1" exitCode=0 Feb 26 14:41:46 crc 
kubenswrapper[4809]: I0226 14:41:46.960932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerDied","Data":"3feeb7e0824f63340e6903846f3642f321dc337220b34da7b20a7fd960903db4"} Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.960963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerDied","Data":"b18454663391459697351affffb42f4a6d09adf7f4b544f9c238dce32e90001a"} Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.960977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerDied","Data":"8158f171ab00d27cd4a45dbef9b58ae718388d18372b921752b680356ce019e1"} Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.963381 4809 generic.go:334] "Generic (PLEG): container finished" podID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" containerID="b23687ee1125fe608d3e2e63998130bf767040e78c1dcf963247917d4da77d97" exitCode=0 Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.963407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b89wr" event={"ID":"ddf13b0e-9265-48c1-830b-8f0e59578fcf","Type":"ContainerDied","Data":"b23687ee1125fe608d3e2e63998130bf767040e78c1dcf963247917d4da77d97"} Feb 26 14:41:46 crc kubenswrapper[4809]: I0226 14:41:46.978079 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.061984 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.512592 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.534069 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626468 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle\") pod \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626724 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom\") pod \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7xc\" (UniqueName: \"kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc\") pod \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626861 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle\") pod \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626913 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom\") pod \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.626964 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data\") pod \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\" (UID: \"86d9f926-4bf5-4517-aff3-390eb57d6dbb\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.627057 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data\") pod \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.627391 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxcc\" (UniqueName: \"kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc\") pod \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\" (UID: \"f2082c9f-a7b4-44c8-9737-e55e5cf2a841\") " Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.633167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc" (OuterVolumeSpecName: "kube-api-access-cvxcc") pod "f2082c9f-a7b4-44c8-9737-e55e5cf2a841" (UID: "f2082c9f-a7b4-44c8-9737-e55e5cf2a841"). InnerVolumeSpecName "kube-api-access-cvxcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.633671 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc" (OuterVolumeSpecName: "kube-api-access-jt7xc") pod "86d9f926-4bf5-4517-aff3-390eb57d6dbb" (UID: "86d9f926-4bf5-4517-aff3-390eb57d6dbb"). InnerVolumeSpecName "kube-api-access-jt7xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.634236 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2082c9f-a7b4-44c8-9737-e55e5cf2a841" (UID: "f2082c9f-a7b4-44c8-9737-e55e5cf2a841"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.652319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86d9f926-4bf5-4517-aff3-390eb57d6dbb" (UID: "86d9f926-4bf5-4517-aff3-390eb57d6dbb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.671613 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86d9f926-4bf5-4517-aff3-390eb57d6dbb" (UID: "86d9f926-4bf5-4517-aff3-390eb57d6dbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.685877 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2082c9f-a7b4-44c8-9737-e55e5cf2a841" (UID: "f2082c9f-a7b4-44c8-9737-e55e5cf2a841"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.709216 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data" (OuterVolumeSpecName: "config-data") pod "86d9f926-4bf5-4517-aff3-390eb57d6dbb" (UID: "86d9f926-4bf5-4517-aff3-390eb57d6dbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.709625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data" (OuterVolumeSpecName: "config-data") pod "f2082c9f-a7b4-44c8-9737-e55e5cf2a841" (UID: "f2082c9f-a7b4-44c8-9737-e55e5cf2a841"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.730953 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731029 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731045 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7xc\" (UniqueName: \"kubernetes.io/projected/86d9f926-4bf5-4517-aff3-390eb57d6dbb-kube-api-access-jt7xc\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731060 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731075 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731086 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d9f926-4bf5-4517-aff3-390eb57d6dbb-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731097 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.731108 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxcc\" (UniqueName: \"kubernetes.io/projected/f2082c9f-a7b4-44c8-9737-e55e5cf2a841-kube-api-access-cvxcc\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.975195 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-d7bcc9fd4-8k662" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.976240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7bcc9fd4-8k662" event={"ID":"f2082c9f-a7b4-44c8-9737-e55e5cf2a841","Type":"ContainerDied","Data":"306d13770b8d7496e89e321610104787ad95d8f5c3c4f0f9cd35409808a977b7"} Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.976293 4809 scope.go:117] "RemoveContainer" containerID="4233dfae666c04edc88ff51e1f1d05e177ecfc609d932c8989f6c2e4f8b051e5" Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.979865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-766d487f57-m2cst" event={"ID":"86d9f926-4bf5-4517-aff3-390eb57d6dbb","Type":"ContainerDied","Data":"5bb37c04d83d874447615857d903d650bfe995f75c3bd561f664095a0560f44a"} Feb 26 14:41:47 crc kubenswrapper[4809]: I0226 14:41:47.979910 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-766d487f57-m2cst" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.030049 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.030582 4809 scope.go:117] "RemoveContainer" containerID="aeb76944ad19f11aa860a432ee3ba35ae8a5ed4084d939c37133c1ffa3e49143" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.047225 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-d7bcc9fd4-8k662"] Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.059513 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.079629 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-766d487f57-m2cst"] Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.179062 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.244389 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.244963 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerName="heat-engine" containerID="cri-o://55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" gracePeriod=60 Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.272895 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" path="/var/lib/kubelet/pods/86d9f926-4bf5-4517-aff3-390eb57d6dbb/volumes" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.273799 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" path="/var/lib/kubelet/pods/f2082c9f-a7b4-44c8-9737-e55e5cf2a841/volumes" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.532732 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b89wr" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.556995 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557099 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557178 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92sm\" (UniqueName: \"kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557259 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557359 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts\") pod \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\" (UID: \"ddf13b0e-9265-48c1-830b-8f0e59578fcf\") " Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557456 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.557838 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddf13b0e-9265-48c1-830b-8f0e59578fcf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.569935 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts" (OuterVolumeSpecName: "scripts") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.574709 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm" (OuterVolumeSpecName: "kube-api-access-r92sm") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "kube-api-access-r92sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.579944 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.612469 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.658162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data" (OuterVolumeSpecName: "config-data") pod "ddf13b0e-9265-48c1-830b-8f0e59578fcf" (UID: "ddf13b0e-9265-48c1-830b-8f0e59578fcf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.659648 4809 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.659672 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.659681 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.659688 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddf13b0e-9265-48c1-830b-8f0e59578fcf-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.659696 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92sm\" (UniqueName: \"kubernetes.io/projected/ddf13b0e-9265-48c1-830b-8f0e59578fcf-kube-api-access-r92sm\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.992320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b89wr" event={"ID":"ddf13b0e-9265-48c1-830b-8f0e59578fcf","Type":"ContainerDied","Data":"1ddd2a94dd4acb2e8e6c8b58655d8a4d7709823531b22848a9ee447134369702"} Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.992359 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddd2a94dd4acb2e8e6c8b58655d8a4d7709823531b22848a9ee447134369702" Feb 26 14:41:48 crc kubenswrapper[4809]: I0226 14:41:48.992325 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b89wr" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.351946 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.352897 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.352917 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.352938 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" containerName="cinder-db-sync" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.352945 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" containerName="cinder-db-sync" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.352970 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.352978 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.352992 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="init" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353000 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="init" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353028 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353036 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353050 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353059 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353091 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="init" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353099 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="init" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353109 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353116 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353132 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 
14:41:49.353140 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353159 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353167 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.353189 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.353197 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358608 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358665 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api-log" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358691 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358706 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3a9762-ab24-4cbb-ac3a-7efaa74035a3" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358746 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1340cfd-c74a-46d2-ac15-b488ffc4a579" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358773 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358972 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a182993-2e3c-464d-8d6c-e8d62b833f4b" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.358994 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e242fdf-1367-4075-a023-a70b7cdde477" containerName="barbican-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.359026 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2aabbfa-cdfb-4b2e-929c-362abd1f61bd" containerName="dnsmasq-dns" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.359049 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" containerName="cinder-db-sync" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.359497 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.360902 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: E0226 14:41:49.360955 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 
14:41:49.360965 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2082c9f-a7b4-44c8-9737-e55e5cf2a841" containerName="heat-api" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.361434 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d9f926-4bf5-4517-aff3-390eb57d6dbb" containerName="heat-cfnapi" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.363791 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.365835 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.373677 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.374126 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.374313 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qqnbq" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.374634 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380645 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380710 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380818 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380893 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.380985 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2cw\" (UniqueName: \"kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw\") pod 
\"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.494487 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.494572 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.496798 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.497084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.497212 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.497508 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2cw\" (UniqueName: \"kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.498106 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"] Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.501600 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.498158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.509088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.516100 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.527919 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.533418 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2cw\" (UniqueName: \"kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.539607 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.636088 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"] Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.641984 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.642220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h69\" (UniqueName: \"kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.642306 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: 
\"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.642510 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.642800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.642855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.746269 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.747659 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.747884 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.748040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.748085 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.748164 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.748241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b7h69\" (UniqueName: \"kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.749490 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.750191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.750800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.750819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.754793 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.772257 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.772531 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-log" containerID="cri-o://b93f8f4a9d43b5da6538855925f643937ae169dddf139b68bf41ca41edc8ea54" gracePeriod=30 Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.774169 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-httpd" containerID="cri-o://ffe71b0048809032c3a86b2d95a3454513fa7981ff8429d870bad86f6812a6d2" gracePeriod=30 Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.812427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7h69\" (UniqueName: \"kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69\") pod \"dnsmasq-dns-f6bc4c6c9-xqg8z\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.899287 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] 
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.901434 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.906338 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.975723 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.975852 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.975895 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.975948 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.976037 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.976330 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzm5\" (UniqueName: \"kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.976379 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.980074 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 26 14:41:49 crc kubenswrapper[4809]: I0226 14:41:49.991816 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.078770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.078871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.078984 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzm5\" (UniqueName: \"kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.079032 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.079130 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.079212 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.079237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.081211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.082434 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.093631 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.094540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.094902 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.095826 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.113626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzm5\" (UniqueName: \"kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5\") pod \"cinder-api-0\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.265167 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.616186 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 26 14:41:50 crc kubenswrapper[4809]: I0226 14:41:50.832227 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"]
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.072030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerStarted","Data":"dea5d4dd5899312b8bbf9e2533eaf1d59c8c7c70ae0e72ec7be7bb813e90a4fb"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.088938 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" event={"ID":"56890ecc-238d-4b33-b0cc-67c8a5831266","Type":"ContainerStarted","Data":"8d26a9eaf2f1e569580620f0481e333e552cb9aa5d2661ea671f6c326c11e841"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.107659 4809 generic.go:334] "Generic (PLEG): container finished" podID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerID="b93f8f4a9d43b5da6538855925f643937ae169dddf139b68bf41ca41edc8ea54" exitCode=143
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.107705 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerDied","Data":"b93f8f4a9d43b5da6538855925f643937ae169dddf139b68bf41ca41edc8ea54"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.133983 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 26 14:41:53 crc kubenswrapper[4809]: E0226 14:41:51.555859 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 26 14:41:53 crc kubenswrapper[4809]: E0226 14:41:51.565140 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 26 14:41:53 crc kubenswrapper[4809]: E0226 14:41:51.581702 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 26 14:41:53 crc kubenswrapper[4809]: E0226 14:41:51.581778 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerName="heat-engine"
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.839532 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:41:53 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:41:53 crc kubenswrapper[4809]: >
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:51.914535 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:52.125723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerStarted","Data":"c3effd6fe10140f5e9773f762adece3b2f49303fd3c98d80024f11b0980ad02e"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:52.132637 4809 generic.go:334] "Generic (PLEG): container finished" podID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerID="5a1df1317b4f209bb4f84b463b5a8ae3077a521d95e1d0265f7abcbb28f891aa" exitCode=0
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:52.132686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" event={"ID":"56890ecc-238d-4b33-b0cc-67c8a5831266","Type":"ContainerDied","Data":"5a1df1317b4f209bb4f84b463b5a8ae3077a521d95e1d0265f7abcbb28f891aa"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.039398 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:41:53 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:41:53 crc kubenswrapper[4809]: >
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.157469 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerStarted","Data":"ae9a34fbd75beecd2367a9c4a9d5febadcbaab777069683e36cc5932f1ab15e6"}
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.401157 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.401359 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-log" containerID="cri-o://8549972de902dfac038d362fa4c9f8ae04a7dacfc0868f75a91ef1c9e089a614" gracePeriod=30
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.401848 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-httpd" containerID="cri-o://79a9eebe02422f3d3a7746a343b67bd18a589ac6384fdd2f0ca7b94fd5ce302b" gracePeriod=30
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.561778 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9292/healthcheck\": read tcp 10.217.0.2:47326->10.217.0.201:9292: read: connection reset by peer"
Feb 26 14:41:53 crc kubenswrapper[4809]: I0226 14:41:53.562169 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.201:9292/healthcheck\": read tcp 10.217.0.2:47312->10.217.0.201:9292: read: connection reset by peer"
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.224052 4809 generic.go:334] "Generic (PLEG): container finished" podID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerID="ffe71b0048809032c3a86b2d95a3454513fa7981ff8429d870bad86f6812a6d2" exitCode=0
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.224351 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerDied","Data":"ffe71b0048809032c3a86b2d95a3454513fa7981ff8429d870bad86f6812a6d2"}
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.242236 4809 generic.go:334] "Generic (PLEG): container finished" podID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerID="8549972de902dfac038d362fa4c9f8ae04a7dacfc0868f75a91ef1c9e089a614" exitCode=143
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.242339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerDied","Data":"8549972de902dfac038d362fa4c9f8ae04a7dacfc0868f75a91ef1c9e089a614"}
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.246523 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" event={"ID":"56890ecc-238d-4b33-b0cc-67c8a5831266","Type":"ContainerStarted","Data":"22e799992c42e9ec952a01a52a059a4590d1eeee391ae6e138b67a0c84af4deb"}
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.248948 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.278167 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" podStartSLOduration=5.278142987 podStartE2EDuration="5.278142987s" podCreationTimestamp="2026-02-26 14:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:54.26837458 +0000 UTC m=+1692.741695113" watchObservedRunningTime="2026-02-26 14:41:54.278142987 +0000 UTC m=+1692.751463500"
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.564490 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.634342 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.634469 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.634488 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.638688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.638747 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46tsw\" (UniqueName: \"kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.638790 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.638891 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.638971 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts\") pod \"886e92c7-5f48-464f-87d9-4bac65b13ea6\" (UID: \"886e92c7-5f48-464f-87d9-4bac65b13ea6\") "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.642819 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs" (OuterVolumeSpecName: "logs") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.644568 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.653951 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw" (OuterVolumeSpecName: "kube-api-access-46tsw") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "kube-api-access-46tsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.655719 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts" (OuterVolumeSpecName: "scripts") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.718939 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.747031 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46tsw\" (UniqueName: \"kubernetes.io/projected/886e92c7-5f48-464f-87d9-4bac65b13ea6-kube-api-access-46tsw\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.747262 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.747329 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-logs\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.747389 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.747438 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/886e92c7-5f48-464f-87d9-4bac65b13ea6-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.757301 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.760284 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12" (OuterVolumeSpecName: "glance") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.791117 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data" (OuterVolumeSpecName: "config-data") pod "886e92c7-5f48-464f-87d9-4bac65b13ea6" (UID: "886e92c7-5f48-464f-87d9-4bac65b13ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.849557 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") on node \"crc\" "
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.849808 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.849910 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/886e92c7-5f48-464f-87d9-4bac65b13ea6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.919741 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.920171 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12") on node "crc"
Feb 26 14:41:54 crc kubenswrapper[4809]: I0226 14:41:54.952863 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") on node \"crc\" DevicePath \"\""
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.052627 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dkbf"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.136938 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dkbf"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.289377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"886e92c7-5f48-464f-87d9-4bac65b13ea6","Type":"ContainerDied","Data":"afd283439fa30193e2c2307b84ee0fe770a6c360f68c66309757f5772322b6e2"}
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.289442 4809 scope.go:117] "RemoveContainer" containerID="ffe71b0048809032c3a86b2d95a3454513fa7981ff8429d870bad86f6812a6d2"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.289640 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.302299 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerStarted","Data":"4f254bd3403846102a75fa2d0d33d6044d17af5eeeffbc3d70364b05bd527122"}
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.344471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerStarted","Data":"e6759d886886099da605c313f69a359003ead71dc7ef6f82ad6a061bdc5a3376"}
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.344769 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api-log" containerID="cri-o://ae9a34fbd75beecd2367a9c4a9d5febadcbaab777069683e36cc5932f1ab15e6" gracePeriod=30
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.344903 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.345096 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api" containerID="cri-o://e6759d886886099da605c313f69a359003ead71dc7ef6f82ad6a061bdc5a3376" gracePeriod=30
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.349726 4809 scope.go:117] "RemoveContainer" containerID="b93f8f4a9d43b5da6538855925f643937ae169dddf139b68bf41ca41edc8ea54"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.406079 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.427150 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.479493 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 14:41:55 crc kubenswrapper[4809]: E0226 14:41:55.480255 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-httpd"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.480280 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-httpd"
Feb 26 14:41:55 crc kubenswrapper[4809]: E0226 14:41:55.480302 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-log"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.480310 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-log"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.480577 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-httpd"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.480609 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" containerName="glance-log"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.482200 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.494344 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.494491 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.496620 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.496591985 podStartE2EDuration="6.496591985s" podCreationTimestamp="2026-02-26 14:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:55.388717512 +0000 UTC m=+1693.862038045" watchObservedRunningTime="2026-02-26 14:41:55.496591985 +0000 UTC m=+1693.969912498"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.554799 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.566752 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567514 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-logs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567626 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgwc\" (UniqueName: \"kubernetes.io/projected/0a901e0d-8105-4ba3-a31f-71ec7e54983f-kube-api-access-cqgwc\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.567880 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.570728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.676975 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677093 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-logs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0"
Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677152 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgwc\" (UniqueName: \"kubernetes.io/projected/0a901e0d-8105-4ba3-a31f-71ec7e54983f-kube-api-access-cqgwc\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") "
pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677178 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677674 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.677699 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a901e0d-8105-4ba3-a31f-71ec7e54983f-logs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.678101 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.678181 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.681410 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-config-data\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.682391 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-scripts\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 
14:41:55.683420 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.683446 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5430fcd2916c7b014ac0286e22544a8396394be2a5cb5110057444f013914bf0/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.687430 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.688980 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a901e0d-8105-4ba3-a31f-71ec7e54983f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.715816 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgwc\" (UniqueName: \"kubernetes.io/projected/0a901e0d-8105-4ba3-a31f-71ec7e54983f-kube-api-access-cqgwc\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.799496 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46878b92-db1a-4d89-81e7-d84c9ae76e12\") pod \"glance-default-external-api-0\" (UID: \"0a901e0d-8105-4ba3-a31f-71ec7e54983f\") " pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.848642 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 14:41:55 crc kubenswrapper[4809]: I0226 14:41:55.857397 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.277049 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886e92c7-5f48-464f-87d9-4bac65b13ea6" path="/var/lib/kubelet/pods/886e92c7-5f48-464f-87d9-4bac65b13ea6/volumes" Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.381442 4809 generic.go:334] "Generic (PLEG): container finished" podID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerID="e6759d886886099da605c313f69a359003ead71dc7ef6f82ad6a061bdc5a3376" exitCode=0 Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.381472 4809 generic.go:334] "Generic (PLEG): container finished" podID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerID="ae9a34fbd75beecd2367a9c4a9d5febadcbaab777069683e36cc5932f1ab15e6" exitCode=143 Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.381542 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerDied","Data":"e6759d886886099da605c313f69a359003ead71dc7ef6f82ad6a061bdc5a3376"} Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.381614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerDied","Data":"ae9a34fbd75beecd2367a9c4a9d5febadcbaab777069683e36cc5932f1ab15e6"} Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.385272 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerStarted","Data":"503474037d2877e322d5ff8bcd6913be20f6e5caf0c072b236b2f93be9e77a76"} Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.385282 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dkbf" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="registry-server" containerID="cri-o://05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4" gracePeriod=2 Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.431672 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.11887918 podStartE2EDuration="7.431654176s" podCreationTimestamp="2026-02-26 14:41:49 +0000 UTC" firstStartedPulling="2026-02-26 14:41:50.625075679 +0000 UTC m=+1689.098396202" lastFinishedPulling="2026-02-26 14:41:53.937850665 +0000 UTC m=+1692.411171198" observedRunningTime="2026-02-26 14:41:56.418481282 +0000 UTC m=+1694.891801805" watchObservedRunningTime="2026-02-26 14:41:56.431654176 +0000 UTC m=+1694.904974699" Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.596931 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.617083 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.202:9292/healthcheck\": read tcp 10.217.0.2:51722->10.217.0.202:9292: read: connection reset by peer" Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.617103 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9292/healthcheck\": read tcp 10.217.0.2:51712->10.217.0.202:9292: read: connection reset by peer" Feb 26 14:41:56 crc kubenswrapper[4809]: W0226 14:41:56.629850 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a901e0d_8105_4ba3_a31f_71ec7e54983f.slice/crio-83ea26c41c43e81cddd224b1e2d2fbd1d3b338119807faa770c00da905915341 WatchSource:0}: Error finding container 83ea26c41c43e81cddd224b1e2d2fbd1d3b338119807faa770c00da905915341: Status 404 returned error can't find the container with id 83ea26c41c43e81cddd224b1e2d2fbd1d3b338119807faa770c00da905915341 Feb 26 14:41:56 crc kubenswrapper[4809]: I0226 14:41:56.988276 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.047920 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048086 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048149 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048194 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048215 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048326 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.048384 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnzm5\" (UniqueName: \"kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5\") pod \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\" (UID: \"7d40b311-e9b9-4212-bcb5-5998a9fca6b3\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.056425 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5" (OuterVolumeSpecName: "kube-api-access-bnzm5") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "kube-api-access-bnzm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.059344 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs" (OuterVolumeSpecName: "logs") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.059404 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.066389 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts" (OuterVolumeSpecName: "scripts") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.069197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.089901 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.111112 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.154706 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities\") pod \"320e4313-8e76-46bc-97b4-7a2a1c33138f\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.154841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r46gx\" (UniqueName: \"kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx\") pod \"320e4313-8e76-46bc-97b4-7a2a1c33138f\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155128 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content\") pod \"320e4313-8e76-46bc-97b4-7a2a1c33138f\" (UID: \"320e4313-8e76-46bc-97b4-7a2a1c33138f\") " Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155668 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155686 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155696 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155707 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155715 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.155725 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnzm5\" (UniqueName: \"kubernetes.io/projected/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-kube-api-access-bnzm5\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.159205 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities" (OuterVolumeSpecName: "utilities") pod "320e4313-8e76-46bc-97b4-7a2a1c33138f" (UID: "320e4313-8e76-46bc-97b4-7a2a1c33138f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.161237 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx" (OuterVolumeSpecName: "kube-api-access-r46gx") pod "320e4313-8e76-46bc-97b4-7a2a1c33138f" (UID: "320e4313-8e76-46bc-97b4-7a2a1c33138f"). InnerVolumeSpecName "kube-api-access-r46gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.164459 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data" (OuterVolumeSpecName: "config-data") pod "7d40b311-e9b9-4212-bcb5-5998a9fca6b3" (UID: "7d40b311-e9b9-4212-bcb5-5998a9fca6b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.232704 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320e4313-8e76-46bc-97b4-7a2a1c33138f" (UID: "320e4313-8e76-46bc-97b4-7a2a1c33138f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.258992 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r46gx\" (UniqueName: \"kubernetes.io/projected/320e4313-8e76-46bc-97b4-7a2a1c33138f-kube-api-access-r46gx\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.259032 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d40b311-e9b9-4212-bcb5-5998a9fca6b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.259043 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.259052 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320e4313-8e76-46bc-97b4-7a2a1c33138f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.413309 4809 generic.go:334] "Generic (PLEG): container finished" podID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerID="79a9eebe02422f3d3a7746a343b67bd18a589ac6384fdd2f0ca7b94fd5ce302b" exitCode=0 Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.413369 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerDied","Data":"79a9eebe02422f3d3a7746a343b67bd18a589ac6384fdd2f0ca7b94fd5ce302b"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.415002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a901e0d-8105-4ba3-a31f-71ec7e54983f","Type":"ContainerStarted","Data":"83ea26c41c43e81cddd224b1e2d2fbd1d3b338119807faa770c00da905915341"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.417531 4809 generic.go:334] "Generic (PLEG): container finished" podID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerID="55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" exitCode=0 Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.417590 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" event={"ID":"41f133cb-dc08-41e2-beeb-243ce04699a4","Type":"ContainerDied","Data":"55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.420330 4809 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-api-0" event={"ID":"7d40b311-e9b9-4212-bcb5-5998a9fca6b3","Type":"ContainerDied","Data":"c3effd6fe10140f5e9773f762adece3b2f49303fd3c98d80024f11b0980ad02e"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.420372 4809 scope.go:117] "RemoveContainer" containerID="e6759d886886099da605c313f69a359003ead71dc7ef6f82ad6a061bdc5a3376" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.420476 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.439527 4809 generic.go:334] "Generic (PLEG): container finished" podID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerID="05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4" exitCode=0 Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.441199 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dkbf" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.441779 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerDied","Data":"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.441850 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dkbf" event={"ID":"320e4313-8e76-46bc-97b4-7a2a1c33138f","Type":"ContainerDied","Data":"63d61a1c0d2ab78624c6814535a3a370dbd3386297b8ca8be66447929e557980"} Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.479994 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.500183 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550082 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 14:41:57 crc kubenswrapper[4809]: E0226 14:41:57.550696 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="extract-content" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550719 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="extract-content" Feb 26 14:41:57 crc kubenswrapper[4809]: E0226 14:41:57.550738 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550746 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api" Feb 26 14:41:57 crc kubenswrapper[4809]: E0226 14:41:57.550761 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api-log" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550769 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api-log" Feb 26 14:41:57 crc kubenswrapper[4809]: E0226 14:41:57.550804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="registry-server" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550813 4809 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="registry-server" Feb 26 14:41:57 crc kubenswrapper[4809]: E0226 14:41:57.550830 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="extract-utilities" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.550855 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="extract-utilities" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.576982 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" containerName="registry-server" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.577079 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api-log" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.577115 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" containerName="cinder-api" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.608537 4809 scope.go:117] "RemoveContainer" containerID="ae9a34fbd75beecd2367a9c4a9d5febadcbaab777069683e36cc5932f1ab15e6" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.622886 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.636116 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.638624 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.638770 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.640708 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.727195 4809 scope.go:117] "RemoveContainer" containerID="05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.727324 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749232 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749316 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749338 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") 
" pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749396 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-scripts\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749518 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g6fq\" (UniqueName: \"kubernetes.io/projected/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-kube-api-access-6g6fq\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749579 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-logs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.749637 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data-custom\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.776056 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dkbf"] Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852654 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g6fq\" (UniqueName: \"kubernetes.io/projected/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-kube-api-access-6g6fq\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852767 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-logs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852803 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: 
I0226 14:41:57.852847 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data-custom\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852901 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852946 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.852964 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.853031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-scripts\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.853093 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.854711 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-logs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.859166 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.869629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.897835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data-custom\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.898027 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-scripts\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.898275 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-config-data\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.898317 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g6fq\" (UniqueName: \"kubernetes.io/projected/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-kube-api-access-6g6fq\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.898348 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.898625 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc\") " pod="openstack/cinder-api-0" Feb 26 14:41:57 crc kubenswrapper[4809]: I0226 14:41:57.989702 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.040588 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.087649 4809 scope.go:117] "RemoveContainer" containerID="5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.155287 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.169198 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.169543 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.169676 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.169793 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.273585 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459" (OuterVolumeSpecName: "kube-api-access-zr459") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "kube-api-access-zr459". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.274253 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.278565 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.280690 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.280749 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m644x\" (UniqueName: \"kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.280791 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.280995 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.281028 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.281070 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.281090 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.281482 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.281773 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.282151 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs\") pod \"c7b0f56e-4e24-4d34-9576-ede63401881a\" (UID: \"c7b0f56e-4e24-4d34-9576-ede63401881a\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.282178 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") pod \"41f133cb-dc08-41e2-beeb-243ce04699a4\" (UID: \"41f133cb-dc08-41e2-beeb-243ce04699a4\") " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.282991 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.283005 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: W0226 14:41:58.284576 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/41f133cb-dc08-41e2-beeb-243ce04699a4/volumes/kubernetes.io~secret/config-data-custom Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.284591 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.284915 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs" (OuterVolumeSpecName: "logs") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: W0226 14:41:58.284973 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/41f133cb-dc08-41e2-beeb-243ce04699a4/volumes/kubernetes.io~projected/kube-api-access-zr459 Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.285046 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459" (OuterVolumeSpecName: "kube-api-access-zr459") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "kube-api-access-zr459". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.287862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x" (OuterVolumeSpecName: "kube-api-access-m644x") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "kube-api-access-m644x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.312118 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts" (OuterVolumeSpecName: "scripts") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.317967 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24" (OuterVolumeSpecName: "glance") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "pvc-6413f57d-8568-4541-9777-75a4b04caf24". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.341516 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320e4313-8e76-46bc-97b4-7a2a1c33138f" path="/var/lib/kubelet/pods/320e4313-8e76-46bc-97b4-7a2a1c33138f/volumes" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.342879 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d40b311-e9b9-4212-bcb5-5998a9fca6b3" path="/var/lib/kubelet/pods/7d40b311-e9b9-4212-bcb5-5998a9fca6b3/volumes" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.350805 4809 scope.go:117] "RemoveContainer" containerID="3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.383061 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.384745 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data" (OuterVolumeSpecName: "config-data") pod "41f133cb-dc08-41e2-beeb-243ce04699a4" (UID: "41f133cb-dc08-41e2-beeb-243ce04699a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.385503 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") on node \"crc\" " Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.385845 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b0f56e-4e24-4d34-9576-ede63401881a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386704 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386726 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m644x\" (UniqueName: \"kubernetes.io/projected/c7b0f56e-4e24-4d34-9576-ede63401881a-kube-api-access-m644x\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386740 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f133cb-dc08-41e2-beeb-243ce04699a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386754 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr459\" (UniqueName: \"kubernetes.io/projected/41f133cb-dc08-41e2-beeb-243ce04699a4-kube-api-access-zr459\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386763 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.386775 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.416129 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data" (OuterVolumeSpecName: "config-data") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.428664 4809 scope.go:117] "RemoveContainer" containerID="05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4" Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.429608 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4\": container with ID starting with 05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4 not found: ID does not exist" containerID="05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.429657 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4"} err="failed to get container status \"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4\": rpc error: code = NotFound desc = could not find container \"05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4\": container with ID starting with 05c93bfcd4ba1b668750a220be7861bf40f3de138cec285a4978fe416d9b16e4 not found: ID does not exist" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.429687 4809 scope.go:117] "RemoveContainer" containerID="5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0" Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.430272 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0\": container with ID starting with 5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0 not found: ID does not exist" containerID="5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.430300 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0"} err="failed to get container status \"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0\": rpc error: code = NotFound desc = could not find container \"5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0\": container with ID starting with 5735101a6e2baec20c3efe2bffc662af0ee037600458d8f2489ebdd9319c72e0 not found: ID does not exist" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.430322 4809 scope.go:117] "RemoveContainer" containerID="3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b" Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.430536 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b\": container with ID starting with 3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b not found: ID does not exist" containerID="3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.430560 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b"} err="failed to get container status \"3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b\": rpc error: code = NotFound desc = could not 
find container \"3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b\": container with ID starting with 3aaf298d409a0c5e155b8be020adfbe259655875be4e0a0a33ca47752bb5ee3b not found: ID does not exist" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.454717 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.455981 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6413f57d-8568-4541-9777-75a4b04caf24" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24") on node "crc" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.458908 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7b0f56e-4e24-4d34-9576-ede63401881a" (UID: "c7b0f56e-4e24-4d34-9576-ede63401881a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.470628 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.470637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7b0f56e-4e24-4d34-9576-ede63401881a","Type":"ContainerDied","Data":"819a881da657772048c68f2982b2e13df1eb963e21c298e64c4d73905e4779ae"} Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.470703 4809 scope.go:117] "RemoveContainer" containerID="79a9eebe02422f3d3a7746a343b67bd18a589ac6384fdd2f0ca7b94fd5ce302b" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.490103 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.490134 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.490145 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b0f56e-4e24-4d34-9576-ede63401881a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.515068 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a901e0d-8105-4ba3-a31f-71ec7e54983f","Type":"ContainerStarted","Data":"a5f7e3d3b2a1b32dc0b424ed3232f6dc60c1d3a50adb35f264b88ee3022ff017"} Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.518424 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" event={"ID":"41f133cb-dc08-41e2-beeb-243ce04699a4","Type":"ContainerDied","Data":"1a368b3eec2590360dcb7e1d2c6c08dab81937d52a293902bf6b4e314cc6dfd0"} Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.518565 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-56dc8f9c4c-xv6sb" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.642124 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.653753 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.671952 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.677732 4809 scope.go:117] "RemoveContainer" containerID="8549972de902dfac038d362fa4c9f8ae04a7dacfc0868f75a91ef1c9e089a614" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.684412 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-56dc8f9c4c-xv6sb"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.699954 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.700471 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-log" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700487 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-log" Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.700533 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerName="heat-engine" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700542 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerName="heat-engine" Feb 26 14:41:58 crc kubenswrapper[4809]: E0226 14:41:58.700563 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-httpd" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700571 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-httpd" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700777 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-httpd" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700803 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" containerName="glance-log" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.700818 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" containerName="heat-engine" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.702688 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.705700 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.706057 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.709845 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.759171 4809 scope.go:117] "RemoveContainer" containerID="55aa99cd6198fe3a9257bf3cdc8822effa9b0deaf4fc448bb20beedc19c43011" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.793400 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796089 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796148 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtrk\" (UniqueName: \"kubernetes.io/projected/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-kube-api-access-pbtrk\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796183 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796353 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.796468 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898492 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898538 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898588 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898604 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtrk\" (UniqueName: \"kubernetes.io/projected/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-kube-api-access-pbtrk\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.898700 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.899616 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.899928 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.903633 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.903664 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/290bcfa2d95fcff9bcdffee07cdf41f807340e8945582a9224b291984718a620/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.904764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.922881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.923193 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.925794 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.933249 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtrk\" (UniqueName: 
\"kubernetes.io/projected/f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83-kube-api-access-pbtrk\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:58 crc kubenswrapper[4809]: I0226 14:41:58.977193 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6413f57d-8568-4541-9777-75a4b04caf24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6413f57d-8568-4541-9777-75a4b04caf24\") pod \"glance-default-internal-api-0\" (UID: \"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83\") " pod="openstack/glance-default-internal-api-0" Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.035239 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.567396 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0a901e0d-8105-4ba3-a31f-71ec7e54983f","Type":"ContainerStarted","Data":"fc514898ea014459165c6caa442a3dbfc5037bfc51960862b49c617e3a395a5d"} Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.576910 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc","Type":"ContainerStarted","Data":"d1c23397091d9156ae34f3a4c2591bd2309d7f0da0afbacce56142481841eae4"} Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.611273 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.61125193 podStartE2EDuration="4.61125193s" podCreationTimestamp="2026-02-26 14:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:41:59.596441709 +0000 UTC m=+1698.069762232" watchObservedRunningTime="2026-02-26 14:41:59.61125193 +0000 UTC m=+1698.084572453" Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.746989 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.770241 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 14:41:59 crc kubenswrapper[4809]: I0226 14:41:59.993248 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.086249 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.086815 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="dnsmasq-dns" containerID="cri-o://130994327b5f0dedf6502820e7c91a22ce78c4ce4659dcffcdc409692d4b2195" gracePeriod=10 Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.161624 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535282-9wjps"] Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.163455 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.166093 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.166193 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.171812 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.181766 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-9wjps"] Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.242474 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn\") pod \"auto-csr-approver-29535282-9wjps\" (UID: \"fcebfa0a-afe4-41c4-9812-988cbc677e95\") " pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.289010 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f133cb-dc08-41e2-beeb-243ce04699a4" path="/var/lib/kubelet/pods/41f133cb-dc08-41e2-beeb-243ce04699a4/volumes" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.289671 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b0f56e-4e24-4d34-9576-ede63401881a" path="/var/lib/kubelet/pods/c7b0f56e-4e24-4d34-9576-ede63401881a/volumes" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.348533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn\") pod \"auto-csr-approver-29535282-9wjps\" (UID: \"fcebfa0a-afe4-41c4-9812-988cbc677e95\") " pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.376591 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn\") pod \"auto-csr-approver-29535282-9wjps\" (UID: \"fcebfa0a-afe4-41c4-9812-988cbc677e95\") " pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.529309 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.707217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc","Type":"ContainerStarted","Data":"3665c04eb74191ad1d726b1ff282ff5520e3314f4bf294444f2055fcec8307dc"} Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.720483 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerID="130994327b5f0dedf6502820e7c91a22ce78c4ce4659dcffcdc409692d4b2195" exitCode=0 Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.720595 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerDied","Data":"130994327b5f0dedf6502820e7c91a22ce78c4ce4659dcffcdc409692d4b2195"} Feb 26 14:42:00 crc kubenswrapper[4809]: I0226 14:42:00.739868 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83","Type":"ContainerStarted","Data":"5400816dfe9363f1c61118543b4da6e60cf1c6829df4fcd696d5f53fea6a9dd5"} Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.158128 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.204915 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nfjx\" (UniqueName: \"kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.206168 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.206247 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.206368 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.206414 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: \"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.206433 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config\") pod \"ad52f778-5778-459b-82e3-3d112e3d69d5\" (UID: 
\"ad52f778-5778-459b-82e3-3d112e3d69d5\") " Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.217311 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx" (OuterVolumeSpecName: "kube-api-access-7nfjx") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "kube-api-access-7nfjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.310078 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nfjx\" (UniqueName: \"kubernetes.io/projected/ad52f778-5778-459b-82e3-3d112e3d69d5-kube-api-access-7nfjx\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.421984 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-9wjps"] Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.495592 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.497515 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.507407 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.525642 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.525676 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.525685 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.538772 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.538928 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config" (OuterVolumeSpecName: "config") pod "ad52f778-5778-459b-82e3-3d112e3d69d5" (UID: "ad52f778-5778-459b-82e3-3d112e3d69d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.629605 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.629645 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad52f778-5778-459b-82e3-3d112e3d69d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.771444 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc","Type":"ContainerStarted","Data":"2840d47e0a1d64921ab4e6f1ac029ace06f314f994e6d451f5965b965f24d161"} Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.773045 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.776409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-9wjps" event={"ID":"fcebfa0a-afe4-41c4-9812-988cbc677e95","Type":"ContainerStarted","Data":"7823e3d22f65f4cee0ce6489a61d5947f4f2192e409c454633bfd4d519432975"} Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.780942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" event={"ID":"ad52f778-5778-459b-82e3-3d112e3d69d5","Type":"ContainerDied","Data":"de5cc63307f4c69e036452a711022072cc07d6790721368487fc75d32e377e99"} Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.780993 4809 scope.go:117] "RemoveContainer" containerID="130994327b5f0dedf6502820e7c91a22ce78c4ce4659dcffcdc409692d4b2195" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.781161 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5847c5b965-5f9r8" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.797258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83","Type":"ContainerStarted","Data":"ecb62aa7db8426e92a6d1c7db1fb683c8fa2a0a3131ee20c7485a225653c9954"} Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.815659 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:01 crc kubenswrapper[4809]: > Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.816917 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.816900978 podStartE2EDuration="4.816900978s" podCreationTimestamp="2026-02-26 14:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:01.7979575 +0000 UTC m=+1700.271278033" watchObservedRunningTime="2026-02-26 14:42:01.816900978 +0000 UTC m=+1700.290221501" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.848239 4809 scope.go:117] "RemoveContainer" containerID="d36d65466db4edb3be63d7eaf1c98aad2395282f8f1cd790d8042292acaf3aec" Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.850092 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:42:01 crc kubenswrapper[4809]: I0226 14:42:01.864342 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5847c5b965-5f9r8"] Feb 26 14:42:02 crc kubenswrapper[4809]: I0226 14:42:02.270433 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" path="/var/lib/kubelet/pods/ad52f778-5778-459b-82e3-3d112e3d69d5/volumes" Feb 26 14:42:02 crc kubenswrapper[4809]: I0226 14:42:02.783525 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:42:02 crc kubenswrapper[4809]: I0226 14:42:02.831913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83","Type":"ContainerStarted","Data":"1b25c44560f2792ce21afb085618e3e1a59cc4fbe98abfa6751c414f96be5292"} Feb 26 14:42:02 crc kubenswrapper[4809]: I0226 14:42:02.908308 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.908286258 podStartE2EDuration="4.908286258s" podCreationTimestamp="2026-02-26 14:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:02.884136102 +0000 UTC m=+1701.357456645" watchObservedRunningTime="2026-02-26 14:42:02.908286258 +0000 UTC m=+1701.381606781" Feb 26 14:42:03 crc kubenswrapper[4809]: I0226 14:42:03.074686 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:03 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s 
Feb 26 14:42:03 crc kubenswrapper[4809]: > Feb 26 14:42:04 crc kubenswrapper[4809]: I0226 14:42:04.859730 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-9wjps" event={"ID":"fcebfa0a-afe4-41c4-9812-988cbc677e95","Type":"ContainerStarted","Data":"481a4e5754d12714d6896ac5aae8451ac71c85cf679ef4722ff91f8ec7d4773d"} Feb 26 14:42:04 crc kubenswrapper[4809]: I0226 14:42:04.879843 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535282-9wjps" podStartSLOduration=3.258245554 podStartE2EDuration="4.879825859s" podCreationTimestamp="2026-02-26 14:42:00 +0000 UTC" firstStartedPulling="2026-02-26 14:42:01.431284929 +0000 UTC m=+1699.904605442" lastFinishedPulling="2026-02-26 14:42:03.052865224 +0000 UTC m=+1701.526185747" observedRunningTime="2026-02-26 14:42:04.873551281 +0000 UTC m=+1703.346871834" watchObservedRunningTime="2026-02-26 14:42:04.879825859 +0000 UTC m=+1703.353146382" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.097545 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dfdt2"] Feb 26 14:42:05 crc kubenswrapper[4809]: E0226 14:42:05.098180 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="dnsmasq-dns" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.098203 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="dnsmasq-dns" Feb 26 14:42:05 crc kubenswrapper[4809]: E0226 14:42:05.098220 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="init" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.098226 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="init" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.098469 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad52f778-5778-459b-82e3-3d112e3d69d5" containerName="dnsmasq-dns" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.099283 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.128749 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dfdt2"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.150952 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.257493 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.257551 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhgm\" (UniqueName: \"kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.265938 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-t4dwk"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.267549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.304045 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-936d-account-create-update-ll7mb"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.306525 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.318289 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.318563 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.335968 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-936d-account-create-update-ll7mb"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.366048 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.366104 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfhgm\" (UniqueName: \"kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.372385 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.372466 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t4dwk"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.397658 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfhgm\" (UniqueName: \"kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm\") pod \"nova-api-db-create-dfdt2\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.431625 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b0e9-account-create-update-krhvv"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.441890 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.447163 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b0e9-account-create-update-krhvv"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.450909 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.452857 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.470367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.470614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpq45\" (UniqueName: \"kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.470702 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8595\" (UniqueName: \"kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.470789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.546072 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8p8fn"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.548146 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.572591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.572686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpq45\" (UniqueName: \"kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.572778 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8595\" (UniqueName: \"kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.572902 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.572972 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.573092 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6wr\" (UniqueName: \"kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.579570 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.580395 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.633626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8595\" 
(UniqueName: \"kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595\") pod \"nova-api-936d-account-create-update-ll7mb\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.645644 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.645005 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpq45\" (UniqueName: \"kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45\") pod \"nova-cell0-db-create-t4dwk\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.676978 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.677203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stc2x\" (UniqueName: \"kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.677256 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.677380 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6wr\" (UniqueName: \"kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.677907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.695215 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8p8fn"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.755921 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6wr\" (UniqueName: \"kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr\") pod \"nova-cell0-b0e9-account-create-update-krhvv\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc 
kubenswrapper[4809]: I0226 14:42:05.773534 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6455866c87-pbhmh" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.802694 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stc2x\" (UniqueName: \"kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.802797 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.804697 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.812911 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.834148 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ab1d-account-create-update-pzttk"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.842642 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stc2x\" (UniqueName: \"kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x\") pod \"nova-cell1-db-create-8p8fn\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.846538 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.848824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.848859 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.851319 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.897183 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.900893 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab1d-account-create-update-pzttk"] Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.903445 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.934435 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="cinder-scheduler" containerID="cri-o://4f254bd3403846102a75fa2d0d33d6044d17af5eeeffbc3d70364b05bd527122" gracePeriod=30 Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.936133 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="probe" containerID="cri-o://503474037d2877e322d5ff8bcd6913be20f6e5caf0c072b236b2f93be9e77a76" gracePeriod=30 Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.937189 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 14:42:05 crc kubenswrapper[4809]: I0226 14:42:05.941267 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.012145 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.012533 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lw6w\" (UniqueName: \"kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.054324 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.104333 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.104660 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f84bb7b56-576q9" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-api" containerID="cri-o://38591e4eb5542f4971e7324d47e1bc5f751dd92e2b795b0a1bb67640bcc550dc" gracePeriod=30 Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.105294 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f84bb7b56-576q9" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" containerID="cri-o://48ce47a24f289d52f6572ea903dd7e94c903e9ce4b72aa3e109146cb0a2c2898" gracePeriod=30 Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.115369 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 
14:42:06.115414 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lw6w\" (UniqueName: \"kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.117993 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.171060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lw6w\" (UniqueName: \"kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w\") pod \"nova-cell1-ab1d-account-create-update-pzttk\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.210543 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.825140 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-936d-account-create-update-ll7mb"] Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.867005 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dfdt2"] Feb 26 14:42:06 crc kubenswrapper[4809]: I0226 14:42:06.884172 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b0e9-account-create-update-krhvv"] Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.025988 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-936d-account-create-update-ll7mb" event={"ID":"d84a8afd-6b9c-4a60-9d4a-3110f1f72045","Type":"ContainerStarted","Data":"8191f3315aed557615a6620ea4143b6fd76f0518aa01f51378624a8378a56c19"} Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.030428 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8p8fn"] Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.030983 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerID="48ce47a24f289d52f6572ea903dd7e94c903e9ce4b72aa3e109146cb0a2c2898" exitCode=0 Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.032685 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerDied","Data":"48ce47a24f289d52f6572ea903dd7e94c903e9ce4b72aa3e109146cb0a2c2898"} Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.032727 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.143567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t4dwk"] Feb 26 14:42:07 crc kubenswrapper[4809]: I0226 14:42:07.188990 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab1d-account-create-update-pzttk"] Feb 26 14:42:07 crc kubenswrapper[4809]: 
W0226 14:42:07.199081 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3874bc2_4abf_4fb1_9149_ea5cefcf3f70.slice/crio-25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f WatchSource:0}: Error finding container 25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f: Status 404 returned error can't find the container with id 25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.077966 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" event={"ID":"9f72bd78-24ed-4a32-920e-1720c64a2ad3","Type":"ContainerStarted","Data":"7fc2f67e0fb0dd5ccb07faa008722bed1e4077362207a9fe5d8b9366e09e024c"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.078303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" event={"ID":"9f72bd78-24ed-4a32-920e-1720c64a2ad3","Type":"ContainerStarted","Data":"19166aded9cca8f8d88bb57f3cf29108e14dabade973e675963ccff6805d6c1c"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.086404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" event={"ID":"fe4e7da1-aac7-4512-9c26-e948c0fa8e29","Type":"ContainerStarted","Data":"f5650b06635b54b5e8be96da160f3bc46a3cdfc55cd0352966168cff4ba1c6d6"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.086446 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" event={"ID":"fe4e7da1-aac7-4512-9c26-e948c0fa8e29","Type":"ContainerStarted","Data":"d1dca51ede4a721845545ff8044f7ce51a19e0a0fc4946259dfc6fedc255835b"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.096937 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-936d-account-create-update-ll7mb" event={"ID":"d84a8afd-6b9c-4a60-9d4a-3110f1f72045","Type":"ContainerStarted","Data":"94f9b3222c66a5756d98428fc37dda8f3bfa83d31305e0867ff7dc7d3b48cb4f"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.106629 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4dwk" event={"ID":"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70","Type":"ContainerStarted","Data":"13ee9164943c9856ea27b3ed3933e872246a20c9bfc8761744abb03a4b6fc089"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.106675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4dwk" event={"ID":"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70","Type":"ContainerStarted","Data":"25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.123285 4809 generic.go:334] "Generic (PLEG): container finished" podID="76afb306-b352-4274-bc19-1f02f586d784" containerID="503474037d2877e322d5ff8bcd6913be20f6e5caf0c072b236b2f93be9e77a76" exitCode=0 Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.123342 4809 generic.go:334] "Generic (PLEG): container finished" podID="76afb306-b352-4274-bc19-1f02f586d784" containerID="4f254bd3403846102a75fa2d0d33d6044d17af5eeeffbc3d70364b05bd527122" exitCode=0 Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.123418 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerDied","Data":"503474037d2877e322d5ff8bcd6913be20f6e5caf0c072b236b2f93be9e77a76"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.123450 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerDied","Data":"4f254bd3403846102a75fa2d0d33d6044d17af5eeeffbc3d70364b05bd527122"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.129197 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dfdt2" event={"ID":"a12667f1-2d2a-426a-b085-4492d1f57c82","Type":"ContainerStarted","Data":"f35c4d90cfbb95f65e6ed2ad56b1628e96016d07869ce0dc6e2b9e1cae36d587"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.129239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dfdt2" event={"ID":"a12667f1-2d2a-426a-b085-4492d1f57c82","Type":"ContainerStarted","Data":"5d8f1d016ebcc662790ca43f8328a223b1714890d13e2dab14dcf38e12366d43"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.154918 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.155047 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p8fn" event={"ID":"798b5cff-a67c-41a5-9252-d8bda45c5f89","Type":"ContainerStarted","Data":"c7d2b047a828a1773ee1adab26998056fdc37a9e1c69e6c8ad6dde24ffa29ba1"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.155092 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p8fn" event={"ID":"798b5cff-a67c-41a5-9252-d8bda45c5f89","Type":"ContainerStarted","Data":"e6c90bc4cc2523bd112a4d2ef7eb4d7f4b93df11e6d7c7b6f9f5c842caf48056"} Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.194517 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-t4dwk" podStartSLOduration=3.194495658 podStartE2EDuration="3.194495658s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.142586774 +0000 UTC m=+1706.615907297" watchObservedRunningTime="2026-02-26 14:42:08.194495658 +0000 UTC m=+1706.667816181" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.201051 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" podStartSLOduration=3.201000513 podStartE2EDuration="3.201000513s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.11427801 +0000 UTC m=+1706.587598533" watchObservedRunningTime="2026-02-26 14:42:08.201000513 +0000 UTC m=+1706.674321036" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.208541 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-936d-account-create-update-ll7mb" podStartSLOduration=3.208524707 podStartE2EDuration="3.208524707s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.160620996 +0000 UTC m=+1706.633941519" watchObservedRunningTime="2026-02-26 14:42:08.208524707 +0000 
UTC m=+1706.681845230" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.253494 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" podStartSLOduration=3.253474453 podStartE2EDuration="3.253474453s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.19736896 +0000 UTC m=+1706.670689483" watchObservedRunningTime="2026-02-26 14:42:08.253474453 +0000 UTC m=+1706.726794976" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.258960 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dfdt2" podStartSLOduration=3.258940828 podStartE2EDuration="3.258940828s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.215716201 +0000 UTC m=+1706.689036724" watchObservedRunningTime="2026-02-26 14:42:08.258940828 +0000 UTC m=+1706.732261351" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.278810 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8p8fn" podStartSLOduration=3.278787202 podStartE2EDuration="3.278787202s" podCreationTimestamp="2026-02-26 14:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:08.232430685 +0000 UTC m=+1706.705751208" watchObservedRunningTime="2026-02-26 14:42:08.278787202 +0000 UTC m=+1706.752107725" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.465747 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.523894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.523957 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.524096 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f2cw\" (UniqueName: \"kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.524137 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.524206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.524233 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data\") pod \"76afb306-b352-4274-bc19-1f02f586d784\" (UID: \"76afb306-b352-4274-bc19-1f02f586d784\") " Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.525127 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.532476 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw" (OuterVolumeSpecName: "kube-api-access-4f2cw") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "kube-api-access-4f2cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.532761 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts" (OuterVolumeSpecName: "scripts") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.567175 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.635060 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.648941 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.648968 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.648977 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76afb306-b352-4274-bc19-1f02f586d784-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.648988 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f2cw\" (UniqueName: \"kubernetes.io/projected/76afb306-b352-4274-bc19-1f02f586d784-kube-api-access-4f2cw\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.648997 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.739234 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data" (OuterVolumeSpecName: "config-data") pod "76afb306-b352-4274-bc19-1f02f586d784" (UID: "76afb306-b352-4274-bc19-1f02f586d784"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:08 crc kubenswrapper[4809]: I0226 14:42:08.755455 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afb306-b352-4274-bc19-1f02f586d784-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.036130 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.036557 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.075717 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.104299 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.175446 4809 generic.go:334] "Generic (PLEG): container finished" podID="fcebfa0a-afe4-41c4-9812-988cbc677e95" containerID="481a4e5754d12714d6896ac5aae8451ac71c85cf679ef4722ff91f8ec7d4773d" exitCode=0 Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.175499 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-9wjps" event={"ID":"fcebfa0a-afe4-41c4-9812-988cbc677e95","Type":"ContainerDied","Data":"481a4e5754d12714d6896ac5aae8451ac71c85cf679ef4722ff91f8ec7d4773d"} Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.191931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76afb306-b352-4274-bc19-1f02f586d784","Type":"ContainerDied","Data":"dea5d4dd5899312b8bbf9e2533eaf1d59c8c7c70ae0e72ec7be7bb813e90a4fb"} Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.191983 4809 scope.go:117] "RemoveContainer" containerID="503474037d2877e322d5ff8bcd6913be20f6e5caf0c072b236b2f93be9e77a76" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.192231 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.192263 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.192310 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.194978 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.195009 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.232165 4809 scope.go:117] "RemoveContainer" containerID="4f254bd3403846102a75fa2d0d33d6044d17af5eeeffbc3d70364b05bd527122" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.265931 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.303245 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.387840 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:09 crc kubenswrapper[4809]: E0226 14:42:09.388998 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="cinder-scheduler" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.389041 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="cinder-scheduler" Feb 26 14:42:09 crc kubenswrapper[4809]: E0226 14:42:09.389095 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="probe" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.389103 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="probe" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.389621 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="probe" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.389672 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="76afb306-b352-4274-bc19-1f02f586d784" containerName="cinder-scheduler" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.391804 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.398496 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.451298 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476669 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.476724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzkg\" (UniqueName: \"kubernetes.io/projected/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-kube-api-access-nxzkg\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578487 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578540 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578574 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxzkg\" (UniqueName: \"kubernetes.io/projected/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-kube-api-access-nxzkg\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578687 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.578754 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.580557 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.586696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-scripts\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.588637 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.589244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-config-data\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.595632 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.602105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxzkg\" (UniqueName: \"kubernetes.io/projected/74917c3f-f22d-43b0-9fbf-6473cb9c6c9d-kube-api-access-nxzkg\") pod \"cinder-scheduler-0\" (UID: \"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d\") " 
pod="openstack/cinder-scheduler-0" Feb 26 14:42:09 crc kubenswrapper[4809]: I0226 14:42:09.741865 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.205900 4809 generic.go:334] "Generic (PLEG): container finished" podID="c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" containerID="13ee9164943c9856ea27b3ed3933e872246a20c9bfc8761744abb03a4b6fc089" exitCode=0 Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.206256 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4dwk" event={"ID":"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70","Type":"ContainerDied","Data":"13ee9164943c9856ea27b3ed3933e872246a20c9bfc8761744abb03a4b6fc089"} Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.211456 4809 generic.go:334] "Generic (PLEG): container finished" podID="a12667f1-2d2a-426a-b085-4492d1f57c82" containerID="f35c4d90cfbb95f65e6ed2ad56b1628e96016d07869ce0dc6e2b9e1cae36d587" exitCode=0 Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.211521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dfdt2" event={"ID":"a12667f1-2d2a-426a-b085-4492d1f57c82","Type":"ContainerDied","Data":"f35c4d90cfbb95f65e6ed2ad56b1628e96016d07869ce0dc6e2b9e1cae36d587"} Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.213117 4809 generic.go:334] "Generic (PLEG): container finished" podID="798b5cff-a67c-41a5-9252-d8bda45c5f89" containerID="c7d2b047a828a1773ee1adab26998056fdc37a9e1c69e6c8ad6dde24ffa29ba1" exitCode=0 Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.214478 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p8fn" event={"ID":"798b5cff-a67c-41a5-9252-d8bda45c5f89","Type":"ContainerDied","Data":"c7d2b047a828a1773ee1adab26998056fdc37a9e1c69e6c8ad6dde24ffa29ba1"} Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.303554 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76afb306-b352-4274-bc19-1f02f586d784" path="/var/lib/kubelet/pods/76afb306-b352-4274-bc19-1f02f586d784/volumes" Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.304681 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.918643 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.930709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn\") pod \"fcebfa0a-afe4-41c4-9812-988cbc677e95\" (UID: \"fcebfa0a-afe4-41c4-9812-988cbc677e95\") " Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.945687 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn" (OuterVolumeSpecName: "kube-api-access-xd7gn") pod "fcebfa0a-afe4-41c4-9812-988cbc677e95" (UID: "fcebfa0a-afe4-41c4-9812-988cbc677e95"). InnerVolumeSpecName "kube-api-access-xd7gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:10 crc kubenswrapper[4809]: I0226 14:42:10.952645 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/fcebfa0a-afe4-41c4-9812-988cbc677e95-kube-api-access-xd7gn\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.091048 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.227275 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d","Type":"ContainerStarted","Data":"b439e6248c0eedd5297653307bb95d34e31c36ac7282d9c79f8d95bbb77ee7cf"} Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.236527 4809 generic.go:334] "Generic (PLEG): container finished" podID="9f72bd78-24ed-4a32-920e-1720c64a2ad3" containerID="7fc2f67e0fb0dd5ccb07faa008722bed1e4077362207a9fe5d8b9366e09e024c" exitCode=0 Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.236630 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" event={"ID":"9f72bd78-24ed-4a32-920e-1720c64a2ad3","Type":"ContainerDied","Data":"7fc2f67e0fb0dd5ccb07faa008722bed1e4077362207a9fe5d8b9366e09e024c"} Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.244142 4809 generic.go:334] "Generic (PLEG): container finished" podID="fe4e7da1-aac7-4512-9c26-e948c0fa8e29" containerID="f5650b06635b54b5e8be96da160f3bc46a3cdfc55cd0352966168cff4ba1c6d6" exitCode=0 Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.244335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" event={"ID":"fe4e7da1-aac7-4512-9c26-e948c0fa8e29","Type":"ContainerDied","Data":"f5650b06635b54b5e8be96da160f3bc46a3cdfc55cd0352966168cff4ba1c6d6"} Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.247899 4809 generic.go:334] "Generic (PLEG): container finished" podID="d84a8afd-6b9c-4a60-9d4a-3110f1f72045" containerID="94f9b3222c66a5756d98428fc37dda8f3bfa83d31305e0867ff7dc7d3b48cb4f" exitCode=0 Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.248039 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-936d-account-create-update-ll7mb" event={"ID":"d84a8afd-6b9c-4a60-9d4a-3110f1f72045","Type":"ContainerDied","Data":"94f9b3222c66a5756d98428fc37dda8f3bfa83d31305e0867ff7dc7d3b48cb4f"} Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.256106 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535282-9wjps" event={"ID":"fcebfa0a-afe4-41c4-9812-988cbc677e95","Type":"ContainerDied","Data":"7823e3d22f65f4cee0ce6489a61d5947f4f2192e409c454633bfd4d519432975"} Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.256159 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7823e3d22f65f4cee0ce6489a61d5947f4f2192e409c454633bfd4d519432975" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.256236 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535282-9wjps" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.256331 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.256350 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.316361 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-vvq4j"] Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.343453 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535276-vvq4j"] Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.367728 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.608124 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.608486 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.609501 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.800369 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.800422 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:42:11 crc kubenswrapper[4809]: I0226 14:42:11.839191 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:11 crc kubenswrapper[4809]: > Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.019833 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.235641 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts\") pod \"a12667f1-2d2a-426a-b085-4492d1f57c82\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.235911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfhgm\" (UniqueName: \"kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm\") pod \"a12667f1-2d2a-426a-b085-4492d1f57c82\" (UID: \"a12667f1-2d2a-426a-b085-4492d1f57c82\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.237143 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a12667f1-2d2a-426a-b085-4492d1f57c82" (UID: "a12667f1-2d2a-426a-b085-4492d1f57c82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.268319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm" (OuterVolumeSpecName: "kube-api-access-mfhgm") pod "a12667f1-2d2a-426a-b085-4492d1f57c82" (UID: "a12667f1-2d2a-426a-b085-4492d1f57c82"). InnerVolumeSpecName "kube-api-access-mfhgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.338937 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfhgm\" (UniqueName: \"kubernetes.io/projected/a12667f1-2d2a-426a-b085-4492d1f57c82-kube-api-access-mfhgm\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.339295 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12667f1-2d2a-426a-b085-4492d1f57c82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.365951 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.369799 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.370909 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb73f4b-6f13-4340-a250-fd39e979a4e3" path="/var/lib/kubelet/pods/5eb73f4b-6f13-4340-a250-fd39e979a4e3/volumes" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.386396 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t4dwk" event={"ID":"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70","Type":"ContainerDied","Data":"25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f"} Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.387040 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ea4dbfab8905c13e11e976e32667607d333af100121c8d764b89d0981ae18f" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.403768 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dfdt2" event={"ID":"a12667f1-2d2a-426a-b085-4492d1f57c82","Type":"ContainerDied","Data":"5d8f1d016ebcc662790ca43f8328a223b1714890d13e2dab14dcf38e12366d43"} Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.403824 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8f1d016ebcc662790ca43f8328a223b1714890d13e2dab14dcf38e12366d43" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.403842 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dfdt2" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.445242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpq45\" (UniqueName: \"kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45\") pod \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.445485 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stc2x\" (UniqueName: \"kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x\") pod \"798b5cff-a67c-41a5-9252-d8bda45c5f89\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.445565 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts\") pod \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\" (UID: \"c3874bc2-4abf-4fb1-9149-ea5cefcf3f70\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.445637 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts\") pod \"798b5cff-a67c-41a5-9252-d8bda45c5f89\" (UID: \"798b5cff-a67c-41a5-9252-d8bda45c5f89\") " Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.446494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" (UID: "c3874bc2-4abf-4fb1-9149-ea5cefcf3f70"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.447607 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "798b5cff-a67c-41a5-9252-d8bda45c5f89" (UID: "798b5cff-a67c-41a5-9252-d8bda45c5f89"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.457667 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798b5cff-a67c-41a5-9252-d8bda45c5f89-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.457699 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.460139 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x" (OuterVolumeSpecName: "kube-api-access-stc2x") pod "798b5cff-a67c-41a5-9252-d8bda45c5f89" (UID: "798b5cff-a67c-41a5-9252-d8bda45c5f89"). InnerVolumeSpecName "kube-api-access-stc2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.460186 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45" (OuterVolumeSpecName: "kube-api-access-zpq45") pod "c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" (UID: "c3874bc2-4abf-4fb1-9149-ea5cefcf3f70"). InnerVolumeSpecName "kube-api-access-zpq45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.465986 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerID="38591e4eb5542f4971e7324d47e1bc5f751dd92e2b795b0a1bb67640bcc550dc" exitCode=0 Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.466093 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerDied","Data":"38591e4eb5542f4971e7324d47e1bc5f751dd92e2b795b0a1bb67640bcc550dc"} Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.468477 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p8fn" event={"ID":"798b5cff-a67c-41a5-9252-d8bda45c5f89","Type":"ContainerDied","Data":"e6c90bc4cc2523bd112a4d2ef7eb4d7f4b93df11e6d7c7b6f9f5c842caf48056"} Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.468508 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c90bc4cc2523bd112a4d2ef7eb4d7f4b93df11e6d7c7b6f9f5c842caf48056" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.468560 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8p8fn" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.525180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d","Type":"ContainerStarted","Data":"a5e520b7702ac3a5d029935e6a2103723ebb5981eb933900e42afba484122102"} Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.562689 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpq45\" (UniqueName: \"kubernetes.io/projected/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70-kube-api-access-zpq45\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.562719 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stc2x\" (UniqueName: \"kubernetes.io/projected/798b5cff-a67c-41a5-9252-d8bda45c5f89-kube-api-access-stc2x\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.664545 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.664648 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 14:42:12 crc kubenswrapper[4809]: I0226 14:42:12.665492 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.295138 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:13 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:13 crc kubenswrapper[4809]: > Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.542772 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.556372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d","Type":"ContainerStarted","Data":"c226431b8cc9b42d958b949f5856e826c9090c2fece73af6b29aa2fb76049bd2"} Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.569428 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t4dwk" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.571208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f84bb7b56-576q9" event={"ID":"c1463263-0b2d-4c22-8e09-d1dabdb803e4","Type":"ContainerDied","Data":"ab5713db2fdd85289cf3e3a089a36f9ca09c488726d73cda24b41424864cb20e"} Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.571254 4809 scope.go:117] "RemoveContainer" containerID="48ce47a24f289d52f6572ea903dd7e94c903e9ce4b72aa3e109146cb0a2c2898" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.571380 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f84bb7b56-576q9" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.704730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wlf2\" (UniqueName: \"kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2\") pod \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.704831 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config\") pod \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.705049 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs\") pod \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.705081 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config\") pod \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.705147 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle\") pod \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\" (UID: \"c1463263-0b2d-4c22-8e09-d1dabdb803e4\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.709875 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.742694 4809 scope.go:117] "RemoveContainer" containerID="38591e4eb5542f4971e7324d47e1bc5f751dd92e2b795b0a1bb67640bcc550dc" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.744383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c1463263-0b2d-4c22-8e09-d1dabdb803e4" (UID: "c1463263-0b2d-4c22-8e09-d1dabdb803e4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.766835 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2" (OuterVolumeSpecName: "kube-api-access-2wlf2") pod "c1463263-0b2d-4c22-8e09-d1dabdb803e4" (UID: "c1463263-0b2d-4c22-8e09-d1dabdb803e4"). InnerVolumeSpecName "kube-api-access-2wlf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.837331 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wlf2\" (UniqueName: \"kubernetes.io/projected/c1463263-0b2d-4c22-8e09-d1dabdb803e4-kube-api-access-2wlf2\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.837661 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.843752 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config" (OuterVolumeSpecName: "config") pod "c1463263-0b2d-4c22-8e09-d1dabdb803e4" (UID: "c1463263-0b2d-4c22-8e09-d1dabdb803e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.918156 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1463263-0b2d-4c22-8e09-d1dabdb803e4" (UID: "c1463263-0b2d-4c22-8e09-d1dabdb803e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.938775 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts\") pod \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.938968 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6wr\" (UniqueName: \"kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr\") pod \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\" (UID: \"fe4e7da1-aac7-4512-9c26-e948c0fa8e29\") " Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.939692 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.939713 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.940341 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe4e7da1-aac7-4512-9c26-e948c0fa8e29" (UID: "fe4e7da1-aac7-4512-9c26-e948c0fa8e29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.957649 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr" (OuterVolumeSpecName: "kube-api-access-mb6wr") pod "fe4e7da1-aac7-4512-9c26-e948c0fa8e29" (UID: "fe4e7da1-aac7-4512-9c26-e948c0fa8e29"). 
InnerVolumeSpecName "kube-api-access-mb6wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:13 crc kubenswrapper[4809]: I0226 14:42:13.996728 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.040946 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts\") pod \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.041215 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8595\" (UniqueName: \"kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595\") pod \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\" (UID: \"d84a8afd-6b9c-4a60-9d4a-3110f1f72045\") " Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.041929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d84a8afd-6b9c-4a60-9d4a-3110f1f72045" (UID: "d84a8afd-6b9c-4a60-9d4a-3110f1f72045"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.045304 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.045330 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.045340 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6wr\" (UniqueName: \"kubernetes.io/projected/fe4e7da1-aac7-4512-9c26-e948c0fa8e29-kube-api-access-mb6wr\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.048542 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595" (OuterVolumeSpecName: "kube-api-access-n8595") pod "d84a8afd-6b9c-4a60-9d4a-3110f1f72045" (UID: "d84a8afd-6b9c-4a60-9d4a-3110f1f72045"). InnerVolumeSpecName "kube-api-access-n8595". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.049333 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.094043 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c1463263-0b2d-4c22-8e09-d1dabdb803e4" (UID: "c1463263-0b2d-4c22-8e09-d1dabdb803e4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.146374 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts\") pod \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.146526 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lw6w\" (UniqueName: \"kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w\") pod \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\" (UID: \"9f72bd78-24ed-4a32-920e-1720c64a2ad3\") " Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.147167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f72bd78-24ed-4a32-920e-1720c64a2ad3" (UID: "9f72bd78-24ed-4a32-920e-1720c64a2ad3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.147519 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f72bd78-24ed-4a32-920e-1720c64a2ad3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.147538 4809 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1463263-0b2d-4c22-8e09-d1dabdb803e4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.147553 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8595\" (UniqueName: \"kubernetes.io/projected/d84a8afd-6b9c-4a60-9d4a-3110f1f72045-kube-api-access-n8595\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.150745 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w" (OuterVolumeSpecName: "kube-api-access-4lw6w") pod "9f72bd78-24ed-4a32-920e-1720c64a2ad3" (UID: "9f72bd78-24ed-4a32-920e-1720c64a2ad3"). InnerVolumeSpecName "kube-api-access-4lw6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.254863 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lw6w\" (UniqueName: \"kubernetes.io/projected/9f72bd78-24ed-4a32-920e-1720c64a2ad3-kube-api-access-4lw6w\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.293630 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.302246 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f84bb7b56-576q9"] Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.583968 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" event={"ID":"9f72bd78-24ed-4a32-920e-1720c64a2ad3","Type":"ContainerDied","Data":"19166aded9cca8f8d88bb57f3cf29108e14dabade973e675963ccff6805d6c1c"} Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.584359 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19166aded9cca8f8d88bb57f3cf29108e14dabade973e675963ccff6805d6c1c" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.583990 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab1d-account-create-update-pzttk" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.585556 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.585595 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b0e9-account-create-update-krhvv" event={"ID":"fe4e7da1-aac7-4512-9c26-e948c0fa8e29","Type":"ContainerDied","Data":"d1dca51ede4a721845545ff8044f7ce51a19e0a0fc4946259dfc6fedc255835b"} Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.585644 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1dca51ede4a721845545ff8044f7ce51a19e0a0fc4946259dfc6fedc255835b" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.589095 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-936d-account-create-update-ll7mb" event={"ID":"d84a8afd-6b9c-4a60-9d4a-3110f1f72045","Type":"ContainerDied","Data":"8191f3315aed557615a6620ea4143b6fd76f0518aa01f51378624a8378a56c19"} Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.589141 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8191f3315aed557615a6620ea4143b6fd76f0518aa01f51378624a8378a56c19" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.589141 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-936d-account-create-update-ll7mb" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.620816 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.620791241 podStartE2EDuration="5.620791241s" podCreationTimestamp="2026-02-26 14:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:14.614005158 +0000 UTC m=+1713.087325711" watchObservedRunningTime="2026-02-26 14:42:14.620791241 +0000 UTC m=+1713.094111764" Feb 26 14:42:14 crc kubenswrapper[4809]: I0226 14:42:14.742121 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.973174 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jhstk"] Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.974857 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-api" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.974942 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-api" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975057 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e7da1-aac7-4512-9c26-e948c0fa8e29" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975110 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e7da1-aac7-4512-9c26-e948c0fa8e29" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975187 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12667f1-2d2a-426a-b085-4492d1f57c82" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975246 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12667f1-2d2a-426a-b085-4492d1f57c82" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975313 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84a8afd-6b9c-4a60-9d4a-3110f1f72045" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975366 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84a8afd-6b9c-4a60-9d4a-3110f1f72045" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975669 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f72bd78-24ed-4a32-920e-1720c64a2ad3" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975723 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f72bd78-24ed-4a32-920e-1720c64a2ad3" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975783 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975837 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.975892 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fcebfa0a-afe4-41c4-9812-988cbc677e95" containerName="oc" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.975953 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcebfa0a-afe4-41c4-9812-988cbc677e95" containerName="oc" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.976023 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798b5cff-a67c-41a5-9252-d8bda45c5f89" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976074 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="798b5cff-a67c-41a5-9252-d8bda45c5f89" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: E0226 14:42:15.976145 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976197 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976523 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="798b5cff-a67c-41a5-9252-d8bda45c5f89" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976592 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e7da1-aac7-4512-9c26-e948c0fa8e29" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976644 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcebfa0a-afe4-41c4-9812-988cbc677e95" containerName="oc" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976703 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976764 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-api" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976889 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84a8afd-6b9c-4a60-9d4a-3110f1f72045" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.976960 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" containerName="neutron-httpd" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.977042 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12667f1-2d2a-426a-b085-4492d1f57c82" containerName="mariadb-database-create" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.977105 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f72bd78-24ed-4a32-920e-1720c64a2ad3" containerName="mariadb-account-create-update" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.977934 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.981438 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gfcnh" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.981612 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.981725 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 26 14:42:15 crc kubenswrapper[4809]: I0226 14:42:15.988822 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jhstk"] Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.097892 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.098212 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.098465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.098593 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddtq\" (UniqueName: \"kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.201154 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.201485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.201649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: 
\"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.201763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddtq\" (UniqueName: \"kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.210355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.210611 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.220918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddtq\" (UniqueName: \"kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.233739 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts\") pod \"nova-cell0-conductor-db-sync-jhstk\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") " pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.274822 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1463263-0b2d-4c22-8e09-d1dabdb803e4" path="/var/lib/kubelet/pods/c1463263-0b2d-4c22-8e09-d1dabdb803e4/volumes" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.308201 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jhstk" Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.660566 4809 generic.go:334] "Generic (PLEG): container finished" podID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerID="18b846f463bf1e1e6f69070b3a68841650fd884e7bd375ecacf1b0ba2fe5d5ba" exitCode=137 Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.660676 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerDied","Data":"18b846f463bf1e1e6f69070b3a68841650fd884e7bd375ecacf1b0ba2fe5d5ba"} Feb 26 14:42:16 crc kubenswrapper[4809]: I0226 14:42:16.846892 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jhstk"] Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.033794 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132422 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132543 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87twx\" (UniqueName: \"kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132712 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132771 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132813 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.132951 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd\") pod \"fade2ea0-a1bc-4a71-82ca-515485a96868\" (UID: \"fade2ea0-a1bc-4a71-82ca-515485a96868\") " Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.133956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.134501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.139361 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts" (OuterVolumeSpecName: "scripts") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.158339 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx" (OuterVolumeSpecName: "kube-api-access-87twx") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "kube-api-access-87twx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.218314 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.236420 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87twx\" (UniqueName: \"kubernetes.io/projected/fade2ea0-a1bc-4a71-82ca-515485a96868-kube-api-access-87twx\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.236464 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.236477 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.236488 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.236515 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fade2ea0-a1bc-4a71-82ca-515485a96868-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.262249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.279449 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data" (OuterVolumeSpecName: "config-data") pod "fade2ea0-a1bc-4a71-82ca-515485a96868" (UID: "fade2ea0-a1bc-4a71-82ca-515485a96868"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.339976 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.340047 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fade2ea0-a1bc-4a71-82ca-515485a96868-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.672805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jhstk" event={"ID":"4403ebd6-aa8d-4398-842e-f33ef09117cc","Type":"ContainerStarted","Data":"8986db391a7aa154883de45fbc7f2425b58a5b4db56557392b6dd231a6299e03"} Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.676612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fade2ea0-a1bc-4a71-82ca-515485a96868","Type":"ContainerDied","Data":"5c7fb3320f7eb4bacfdf689eaebab8d89aee99ed6ba268dd97a2225534862420"} Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.676666 4809 scope.go:117] "RemoveContainer" containerID="18b846f463bf1e1e6f69070b3a68841650fd884e7bd375ecacf1b0ba2fe5d5ba" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.677103 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.723644 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.739389 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.758182 4809 scope.go:117] "RemoveContainer" containerID="3feeb7e0824f63340e6903846f3642f321dc337220b34da7b20a7fd960903db4" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.763295 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:17 crc kubenswrapper[4809]: E0226 14:42:17.764031 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="proxy-httpd" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764058 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="proxy-httpd" Feb 26 14:42:17 crc kubenswrapper[4809]: E0226 14:42:17.764077 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-central-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764085 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-central-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: E0226 14:42:17.764107 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-notification-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764117 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-notification-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: E0226 14:42:17.764149 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="sg-core" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764158 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="sg-core" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764467 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-notification-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764493 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="sg-core" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764519 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="ceilometer-central-agent" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.764636 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" containerName="proxy-httpd" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.767459 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.770623 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.773151 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.786936 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.807366 4809 scope.go:117] "RemoveContainer" containerID="b18454663391459697351affffb42f4a6d09adf7f4b544f9c238dce32e90001a" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.829164 4809 scope.go:117] "RemoveContainer" containerID="8158f171ab00d27cd4a45dbef9b58ae718388d18372b921752b680356ce019e1" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.856614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.856827 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.856925 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.856975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " 
pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.857264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.857438 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.857475 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.959941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.959990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.960117 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.960177 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.960208 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.960237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.960331 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data\") pod \"ceilometer-0\" (UID: 
\"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.961776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.962255 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.965549 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.965623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.966292 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.967808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:17 crc kubenswrapper[4809]: I0226 14:42:17.980539 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282\") pod \"ceilometer-0\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") " pod="openstack/ceilometer-0" Feb 26 14:42:18 crc kubenswrapper[4809]: I0226 14:42:18.107623 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:42:18 crc kubenswrapper[4809]: I0226 14:42:18.286070 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fade2ea0-a1bc-4a71-82ca-515485a96868" path="/var/lib/kubelet/pods/fade2ea0-a1bc-4a71-82ca-515485a96868/volumes" Feb 26 14:42:18 crc kubenswrapper[4809]: I0226 14:42:18.711916 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:18 crc kubenswrapper[4809]: W0226 14:42:18.719270 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a83601f_e223_45c5_8e34_238f83f8c028.slice/crio-35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879 WatchSource:0}: Error finding container 35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879: Status 404 returned error can't find the container with id 35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879 Feb 26 14:42:19 crc kubenswrapper[4809]: I0226 14:42:19.725047 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerStarted","Data":"35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879"} Feb 26 14:42:19 crc kubenswrapper[4809]: I0226 14:42:19.958094 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 14:42:20 crc kubenswrapper[4809]: I0226 14:42:20.704471 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:42:21 crc kubenswrapper[4809]: I0226 14:42:21.797223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerStarted","Data":"5384b7a15fa5b1b0743e6a9c331659eee59fc3e23b20e17e4b965189614b5e49"} Feb 26 14:42:21 crc kubenswrapper[4809]: I0226 14:42:21.815405 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:21 crc kubenswrapper[4809]: > Feb 26 14:42:21 crc kubenswrapper[4809]: I0226 14:42:21.815494 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:42:21 crc kubenswrapper[4809]: I0226 14:42:21.816511 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b"} pod="openshift-marketplace/redhat-operators-lkxlc" containerMessage="Container registry-server failed startup probe, will be restarted" Feb 26 14:42:21 crc kubenswrapper[4809]: I0226 14:42:21.816552 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" containerID="cri-o://f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b" gracePeriod=30 Feb 26 14:42:22 crc kubenswrapper[4809]: I0226 14:42:22.998595 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" 
probeResult="failure" output=< Feb 26 14:42:22 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:22 crc kubenswrapper[4809]: > Feb 26 14:42:29 crc kubenswrapper[4809]: I0226 14:42:29.904840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jhstk" event={"ID":"4403ebd6-aa8d-4398-842e-f33ef09117cc","Type":"ContainerStarted","Data":"54700816eda36c542c9266977891ddb0d97193bd82d2a3ee9808db703cf4048d"} Feb 26 14:42:29 crc kubenswrapper[4809]: I0226 14:42:29.909054 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerStarted","Data":"79fd9991e5e2d68987ced9c013cf26f4c44ba2b57a23741e975d08d20f8cdaa1"} Feb 26 14:42:29 crc kubenswrapper[4809]: I0226 14:42:29.909107 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerStarted","Data":"10b9c9d765f774e794cfb3a5b0980f96f1fdeef253f6bd2f14da259e47493751"} Feb 26 14:42:29 crc kubenswrapper[4809]: I0226 14:42:29.934727 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jhstk" podStartSLOduration=3.029596893 podStartE2EDuration="14.934709565s" podCreationTimestamp="2026-02-26 14:42:15 +0000 UTC" firstStartedPulling="2026-02-26 14:42:16.876274615 +0000 UTC m=+1715.349595138" lastFinishedPulling="2026-02-26 14:42:28.781387277 +0000 UTC m=+1727.254707810" observedRunningTime="2026-02-26 14:42:29.921540922 +0000 UTC m=+1728.394861445" watchObservedRunningTime="2026-02-26 14:42:29.934709565 +0000 UTC m=+1728.408030088" Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.013708 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.066578 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvqmb" Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.136230 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvqmb"] Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.274193 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"] Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.274779 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nt9t" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="registry-server" containerID="cri-o://1ea9f13f2cdfa3cf44dea40efeb7ee4be4b71ebecc15d355ad6bf613120ac8c9" gracePeriod=2 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.953386 4809 generic.go:334] "Generic (PLEG): container finished" podID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerID="1ea9f13f2cdfa3cf44dea40efeb7ee4be4b71ebecc15d355ad6bf613120ac8c9" exitCode=0 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.953813 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerDied","Data":"1ea9f13f2cdfa3cf44dea40efeb7ee4be4b71ebecc15d355ad6bf613120ac8c9"} Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.953852 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9nt9t" event={"ID":"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5","Type":"ContainerDied","Data":"38e884104c484c2f93b6a680ec0ec40b27525ae78fd80c51add1156a1face721"} Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.953865 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e884104c484c2f93b6a680ec0ec40b27525ae78fd80c51add1156a1face721" Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.960956 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-central-agent" containerID="cri-o://5384b7a15fa5b1b0743e6a9c331659eee59fc3e23b20e17e4b965189614b5e49" gracePeriod=30 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.961260 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerStarted","Data":"0ab02caa01c8c6e8651a90d4a6889c0fa43a7cd9121c00cdb2faa03b8ff377fb"} Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.961311 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.961695 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="proxy-httpd" containerID="cri-o://0ab02caa01c8c6e8651a90d4a6889c0fa43a7cd9121c00cdb2faa03b8ff377fb" gracePeriod=30 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.961761 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="sg-core" containerID="cri-o://79fd9991e5e2d68987ced9c013cf26f4c44ba2b57a23741e975d08d20f8cdaa1" gracePeriod=30 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.961807 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-notification-agent" containerID="cri-o://10b9c9d765f774e794cfb3a5b0980f96f1fdeef253f6bd2f14da259e47493751" gracePeriod=30 Feb 26 14:42:32 crc kubenswrapper[4809]: I0226 14:42:32.969243 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nt9t" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.001947 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.02389379 podStartE2EDuration="16.001920168s" podCreationTimestamp="2026-02-26 14:42:17 +0000 UTC" firstStartedPulling="2026-02-26 14:42:18.721565881 +0000 UTC m=+1717.194886404" lastFinishedPulling="2026-02-26 14:42:31.699592249 +0000 UTC m=+1730.172912782" observedRunningTime="2026-02-26 14:42:32.989904507 +0000 UTC m=+1731.463225030" watchObservedRunningTime="2026-02-26 14:42:33.001920168 +0000 UTC m=+1731.475240701" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.023855 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content\") pod \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.024205 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities\") pod \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.024255 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c9vg\" (UniqueName: \"kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg\") pod \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\" (UID: \"5b7c3055-ec19-464b-8b36-a0b2ba4f68c5\") " Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.025166 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities" (OuterVolumeSpecName: "utilities") pod "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" (UID: "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.057237 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg" (OuterVolumeSpecName: "kube-api-access-8c9vg") pod "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" (UID: "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5"). InnerVolumeSpecName "kube-api-access-8c9vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.102203 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" (UID: "5b7c3055-ec19-464b-8b36-a0b2ba4f68c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.127240 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.127285 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.127300 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c9vg\" (UniqueName: \"kubernetes.io/projected/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5-kube-api-access-8c9vg\") on node \"crc\" DevicePath \"\"" Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.973573 4809 generic.go:334] "Generic (PLEG): container finished" podID="1a83601f-e223-45c5-8e34-238f83f8c028" containerID="0ab02caa01c8c6e8651a90d4a6889c0fa43a7cd9121c00cdb2faa03b8ff377fb" exitCode=0 Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.973886 4809 generic.go:334] "Generic (PLEG): container finished" podID="1a83601f-e223-45c5-8e34-238f83f8c028" containerID="79fd9991e5e2d68987ced9c013cf26f4c44ba2b57a23741e975d08d20f8cdaa1" exitCode=2 Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.973898 4809 generic.go:334] "Generic (PLEG): container finished" podID="1a83601f-e223-45c5-8e34-238f83f8c028" containerID="10b9c9d765f774e794cfb3a5b0980f96f1fdeef253f6bd2f14da259e47493751" exitCode=0 Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.973762 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerDied","Data":"0ab02caa01c8c6e8651a90d4a6889c0fa43a7cd9121c00cdb2faa03b8ff377fb"} Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.973993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerDied","Data":"79fd9991e5e2d68987ced9c013cf26f4c44ba2b57a23741e975d08d20f8cdaa1"} Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.974093 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerDied","Data":"10b9c9d765f774e794cfb3a5b0980f96f1fdeef253f6bd2f14da259e47493751"} Feb 26 14:42:33 crc kubenswrapper[4809]: I0226 14:42:33.975064 4809 util.go:48] "No ready sandbox for pod can be found. 
Feb 26 14:42:34 crc kubenswrapper[4809]: I0226 14:42:34.008441 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"]
Feb 26 14:42:34 crc kubenswrapper[4809]: I0226 14:42:34.023223 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nt9t"]
Feb 26 14:42:34 crc kubenswrapper[4809]: I0226 14:42:34.269664 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" path="/var/lib/kubelet/pods/5b7c3055-ec19-464b-8b36-a0b2ba4f68c5/volumes"
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.018679 4809 generic.go:334] "Generic (PLEG): container finished" podID="1a83601f-e223-45c5-8e34-238f83f8c028" containerID="5384b7a15fa5b1b0743e6a9c331659eee59fc3e23b20e17e4b965189614b5e49" exitCode=0
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.018869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerDied","Data":"5384b7a15fa5b1b0743e6a9c331659eee59fc3e23b20e17e4b965189614b5e49"}
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.024327 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a83601f-e223-45c5-8e34-238f83f8c028","Type":"ContainerDied","Data":"35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879"}
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.024354 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35ef1df5d4e7db441ab163914dc98a38d674db5d203e45e718ecd9819e9a2879"
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.127500 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242515 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242584 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242649 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242676 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242890 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.242955 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.243154 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.243185 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.243658 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle\") pod \"1a83601f-e223-45c5-8e34-238f83f8c028\" (UID: \"1a83601f-e223-45c5-8e34-238f83f8c028\") "
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.244945 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.244971 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a83601f-e223-45c5-8e34-238f83f8c028-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.254450 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282" (OuterVolumeSpecName: "kube-api-access-gk282") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "kube-api-access-gk282". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.286250 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts" (OuterVolumeSpecName: "scripts") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.287594 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.345734 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.348556 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.348596 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.348607 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/1a83601f-e223-45c5-8e34-238f83f8c028-kube-api-access-gk282\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.348621 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.382844 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data" (OuterVolumeSpecName: "config-data") pod "1a83601f-e223-45c5-8e34-238f83f8c028" (UID: "1a83601f-e223-45c5-8e34-238f83f8c028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:38 crc kubenswrapper[4809]: I0226 14:42:38.451387 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a83601f-e223-45c5-8e34-238f83f8c028-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.034209 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.121240 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.141400 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.146733 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147210 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="registry-server"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147227 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="registry-server"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147241 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="extract-utilities"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147248 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="extract-utilities"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147256 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-notification-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147262 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-notification-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147272 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="proxy-httpd"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147277 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="proxy-httpd"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147292 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="sg-core"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147297 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="sg-core"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147309 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="extract-content"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147315 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="extract-content"
Feb 26 14:42:39 crc kubenswrapper[4809]: E0226 14:42:39.147345 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-central-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147351 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-central-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147548 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-central-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147565 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7c3055-ec19-464b-8b36-a0b2ba4f68c5" containerName="registry-server"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147576 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="sg-core"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147588 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="ceilometer-notification-agent"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.147602 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" containerName="proxy-httpd"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.149754 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.152750 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.152801 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.162834 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272071 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272246 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272294 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272704 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjd88\" (UniqueName: \"kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272884 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272930 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.272976 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.374826 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjd88\" (UniqueName: \"kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.374916 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.374943 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.374968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.375076 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.375161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.375191 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.376000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.376360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.380242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.380473 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.381759 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.384720 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.397945 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjd88\" (UniqueName: \"kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88\") pod \"ceilometer-0\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.468039 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 14:42:39 crc kubenswrapper[4809]: I0226 14:42:39.984537 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:42:40 crc kubenswrapper[4809]: I0226 14:42:40.047069 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerStarted","Data":"ce61493cf28cb6432b033e1422394363f9bc16d9089ccd6a408972d042808c08"}
Feb 26 14:42:40 crc kubenswrapper[4809]: I0226 14:42:40.270361 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a83601f-e223-45c5-8e34-238f83f8c028" path="/var/lib/kubelet/pods/1a83601f-e223-45c5-8e34-238f83f8c028/volumes"
Feb 26 14:42:41 crc kubenswrapper[4809]: I0226 14:42:41.059138 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerStarted","Data":"7cacc812556f836dc908844efbf858db3ac668a0cab8dacd865547954bf6603d"}
Feb 26 14:42:41 crc kubenswrapper[4809]: I0226 14:42:41.795330 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:42:41 crc kubenswrapper[4809]: I0226 14:42:41.795832 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:42:43 crc kubenswrapper[4809]: I0226 14:42:43.081580 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerStarted","Data":"a873706770e4266a295709528f29b42bf1ddea948366438c8eefd3720e2d4366"}
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.417059 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-r5jrr"]
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.419113 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.429388 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-e6d8-account-create-update-xhq4l"]
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.431405 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.435117 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.441383 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-r5jrr"]
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.452803 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-e6d8-account-create-update-xhq4l"]
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.503799 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.503877 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.504064 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nh6m\" (UniqueName: \"kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.504128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwnmk\" (UniqueName: \"kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.606739 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.606788 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.606905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nh6m\" (UniqueName: \"kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.606955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwnmk\" (UniqueName: \"kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.607710 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.608034 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.626206 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nh6m\" (UniqueName: \"kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m\") pod \"aodh-e6d8-account-create-update-xhq4l\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") " pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.627650 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwnmk\" (UniqueName: \"kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk\") pod \"aodh-db-create-r5jrr\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") " pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.740810 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:44 crc kubenswrapper[4809]: I0226 14:42:44.750481 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:45 crc kubenswrapper[4809]: I0226 14:42:45.108182 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerStarted","Data":"bec3a9c7312aad1a4125315d8b0074291e0fa641ebd6763f31359d19e71a3945"}
Feb 26 14:42:45 crc kubenswrapper[4809]: W0226 14:42:45.278730 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb24e2cb6_58cc_407b_bc42_5d83d63a173d.slice/crio-338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23 WatchSource:0}: Error finding container 338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23: Status 404 returned error can't find the container with id 338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23
Feb 26 14:42:45 crc kubenswrapper[4809]: I0226 14:42:45.284325 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-r5jrr"]
Feb 26 14:42:45 crc kubenswrapper[4809]: I0226 14:42:45.426983 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-e6d8-account-create-update-xhq4l"]
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.123904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-e6d8-account-create-update-xhq4l" event={"ID":"7180709f-48cb-4863-95a6-61637c4508f8","Type":"ContainerStarted","Data":"496d06f2cb6a3d28ee0f975e72c008a207e63b6e165041b5498139f4d7c9ff8b"}
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.124275 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-e6d8-account-create-update-xhq4l" event={"ID":"7180709f-48cb-4863-95a6-61637c4508f8","Type":"ContainerStarted","Data":"aa05803b618f8f9be86acc5c735f7cf35ca9d430f7de57d6bfe94d4a422ec616"}
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.129277 4809 generic.go:334] "Generic (PLEG): container finished" podID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerID="f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b" exitCode=0
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.129377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b"}
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.129460 4809 scope.go:117] "RemoveContainer" containerID="cf424c393d156754a00a7443486535e6329c3587fea9601165d03dedfdcfdf3d"
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.134320 4809 generic.go:334] "Generic (PLEG): container finished" podID="b24e2cb6-58cc-407b-bc42-5d83d63a173d" containerID="b237b2a8df805508907caa6e357c4762fb3ff39dbf14a3f8eb3d3a1015aaa4b1" exitCode=0
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.134379 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-r5jrr" event={"ID":"b24e2cb6-58cc-407b-bc42-5d83d63a173d","Type":"ContainerDied","Data":"b237b2a8df805508907caa6e357c4762fb3ff39dbf14a3f8eb3d3a1015aaa4b1"}
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.134415 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-r5jrr" event={"ID":"b24e2cb6-58cc-407b-bc42-5d83d63a173d","Type":"ContainerStarted","Data":"338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23"}
Feb 26 14:42:46 crc kubenswrapper[4809]: I0226 14:42:46.147838 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-e6d8-account-create-update-xhq4l" podStartSLOduration=2.147815343 podStartE2EDuration="2.147815343s" podCreationTimestamp="2026-02-26 14:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:46.139338622 +0000 UTC m=+1744.612659145" watchObservedRunningTime="2026-02-26 14:42:46.147815343 +0000 UTC m=+1744.621135866"
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.148346 4809 generic.go:334] "Generic (PLEG): container finished" podID="7180709f-48cb-4863-95a6-61637c4508f8" containerID="496d06f2cb6a3d28ee0f975e72c008a207e63b6e165041b5498139f4d7c9ff8b" exitCode=0
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.148452 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-e6d8-account-create-update-xhq4l" event={"ID":"7180709f-48cb-4863-95a6-61637c4508f8","Type":"ContainerDied","Data":"496d06f2cb6a3d28ee0f975e72c008a207e63b6e165041b5498139f4d7c9ff8b"}
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.152844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerStarted","Data":"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09"}
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.156832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerStarted","Data":"082bd640c401791a3d397d10b7b862631670ae652a869c89275ce108288268c1"}
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.156878 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.218068 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.196541742 podStartE2EDuration="8.218043361s" podCreationTimestamp="2026-02-26 14:42:39 +0000 UTC" firstStartedPulling="2026-02-26 14:42:39.975288165 +0000 UTC m=+1738.448608698" lastFinishedPulling="2026-02-26 14:42:45.996789794 +0000 UTC m=+1744.470110317" observedRunningTime="2026-02-26 14:42:47.210694212 +0000 UTC m=+1745.684014735" watchObservedRunningTime="2026-02-26 14:42:47.218043361 +0000 UTC m=+1745.691363904"
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.653939 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.760521 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts\") pod \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") "
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.760587 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwnmk\" (UniqueName: \"kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk\") pod \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\" (UID: \"b24e2cb6-58cc-407b-bc42-5d83d63a173d\") "
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.764558 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b24e2cb6-58cc-407b-bc42-5d83d63a173d" (UID: "b24e2cb6-58cc-407b-bc42-5d83d63a173d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.768004 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk" (OuterVolumeSpecName: "kube-api-access-pwnmk") pod "b24e2cb6-58cc-407b-bc42-5d83d63a173d" (UID: "b24e2cb6-58cc-407b-bc42-5d83d63a173d"). InnerVolumeSpecName "kube-api-access-pwnmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.863176 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24e2cb6-58cc-407b-bc42-5d83d63a173d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:47 crc kubenswrapper[4809]: I0226 14:42:47.863225 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwnmk\" (UniqueName: \"kubernetes.io/projected/b24e2cb6-58cc-407b-bc42-5d83d63a173d-kube-api-access-pwnmk\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.186677 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-r5jrr" event={"ID":"b24e2cb6-58cc-407b-bc42-5d83d63a173d","Type":"ContainerDied","Data":"338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23"}
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.187133 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="338f7722478e89848e5fa29539f0468b5ce5223190892ab817d87ea3e3085d23"
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.187220 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-r5jrr"
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.668707 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.793523 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts\") pod \"7180709f-48cb-4863-95a6-61637c4508f8\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") "
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.793694 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nh6m\" (UniqueName: \"kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m\") pod \"7180709f-48cb-4863-95a6-61637c4508f8\" (UID: \"7180709f-48cb-4863-95a6-61637c4508f8\") "
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.794006 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7180709f-48cb-4863-95a6-61637c4508f8" (UID: "7180709f-48cb-4863-95a6-61637c4508f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.795244 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7180709f-48cb-4863-95a6-61637c4508f8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.813131 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m" (OuterVolumeSpecName: "kube-api-access-8nh6m") pod "7180709f-48cb-4863-95a6-61637c4508f8" (UID: "7180709f-48cb-4863-95a6-61637c4508f8"). InnerVolumeSpecName "kube-api-access-8nh6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:42:48 crc kubenswrapper[4809]: I0226 14:42:48.897727 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nh6m\" (UniqueName: \"kubernetes.io/projected/7180709f-48cb-4863-95a6-61637c4508f8-kube-api-access-8nh6m\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.235856 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-e6d8-account-create-update-xhq4l" event={"ID":"7180709f-48cb-4863-95a6-61637c4508f8","Type":"ContainerDied","Data":"aa05803b618f8f9be86acc5c735f7cf35ca9d430f7de57d6bfe94d4a422ec616"}
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.235907 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa05803b618f8f9be86acc5c735f7cf35ca9d430f7de57d6bfe94d4a422ec616"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.235984 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-e6d8-account-create-update-xhq4l"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.237542 4809 generic.go:334] "Generic (PLEG): container finished" podID="4403ebd6-aa8d-4398-842e-f33ef09117cc" containerID="54700816eda36c542c9266977891ddb0d97193bd82d2a3ee9808db703cf4048d" exitCode=0
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.237592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jhstk" event={"ID":"4403ebd6-aa8d-4398-842e-f33ef09117cc","Type":"ContainerDied","Data":"54700816eda36c542c9266977891ddb0d97193bd82d2a3ee9808db703cf4048d"}
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.783816 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-gjsb5"]
Feb 26 14:42:49 crc kubenswrapper[4809]: E0226 14:42:49.784768 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24e2cb6-58cc-407b-bc42-5d83d63a173d" containerName="mariadb-database-create"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.784790 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24e2cb6-58cc-407b-bc42-5d83d63a173d" containerName="mariadb-database-create"
Feb 26 14:42:49 crc kubenswrapper[4809]: E0226 14:42:49.784829 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7180709f-48cb-4863-95a6-61637c4508f8" containerName="mariadb-account-create-update"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.784837 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7180709f-48cb-4863-95a6-61637c4508f8" containerName="mariadb-account-create-update"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.785115 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7180709f-48cb-4863-95a6-61637c4508f8" containerName="mariadb-account-create-update"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.785167 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24e2cb6-58cc-407b-bc42-5d83d63a173d" containerName="mariadb-database-create"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.786120 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.789816 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.789949 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6p9fd"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.795383 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.795757 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.820393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.820756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.820928 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.820961 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfdpx\" (UniqueName: \"kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.825006 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-gjsb5"]
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.923723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.923763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.923851 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.923871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfdpx\" (UniqueName: \"kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.932210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.934028 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.964740 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:49 crc kubenswrapper[4809]: I0226 14:42:49.979592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfdpx\" (UniqueName: \"kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx\") pod \"aodh-db-sync-gjsb5\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.106540 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-gjsb5"
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.677754 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-gjsb5"]
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.741607 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkxlc"
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.743087 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkxlc"
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.755299 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jhstk"
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.849658 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data\") pod \"4403ebd6-aa8d-4398-842e-f33ef09117cc\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") "
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.849794 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle\") pod \"4403ebd6-aa8d-4398-842e-f33ef09117cc\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") "
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.849931 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dddtq\" (UniqueName: \"kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq\") pod \"4403ebd6-aa8d-4398-842e-f33ef09117cc\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") "
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.850036 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts\") pod \"4403ebd6-aa8d-4398-842e-f33ef09117cc\" (UID: \"4403ebd6-aa8d-4398-842e-f33ef09117cc\") "
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.855363 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq" (OuterVolumeSpecName: "kube-api-access-dddtq") pod "4403ebd6-aa8d-4398-842e-f33ef09117cc" (UID: "4403ebd6-aa8d-4398-842e-f33ef09117cc"). InnerVolumeSpecName "kube-api-access-dddtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.856126 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts" (OuterVolumeSpecName: "scripts") pod "4403ebd6-aa8d-4398-842e-f33ef09117cc" (UID: "4403ebd6-aa8d-4398-842e-f33ef09117cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.886200 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data" (OuterVolumeSpecName: "config-data") pod "4403ebd6-aa8d-4398-842e-f33ef09117cc" (UID: "4403ebd6-aa8d-4398-842e-f33ef09117cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.898833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4403ebd6-aa8d-4398-842e-f33ef09117cc" (UID: "4403ebd6-aa8d-4398-842e-f33ef09117cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.952655 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dddtq\" (UniqueName: \"kubernetes.io/projected/4403ebd6-aa8d-4398-842e-f33ef09117cc-kube-api-access-dddtq\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.952691 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.952702 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:50 crc kubenswrapper[4809]: I0226 14:42:50.952712 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4403ebd6-aa8d-4398-842e-f33ef09117cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.270060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-gjsb5" event={"ID":"8e09189b-a91c-4014-b92b-d8f6bdbd7846","Type":"ContainerStarted","Data":"4f356c55659b67cddd884ad6264717d6763391e6c687395a57426fb632a9bbc4"}
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.272958 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jhstk"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.272894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jhstk" event={"ID":"4403ebd6-aa8d-4398-842e-f33ef09117cc","Type":"ContainerDied","Data":"8986db391a7aa154883de45fbc7f2425b58a5b4db56557392b6dd231a6299e03"}
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.273058 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8986db391a7aa154883de45fbc7f2425b58a5b4db56557392b6dd231a6299e03"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.432903 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 26 14:42:51 crc kubenswrapper[4809]: E0226 14:42:51.433841 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4403ebd6-aa8d-4398-842e-f33ef09117cc" containerName="nova-cell0-conductor-db-sync"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.433864 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4403ebd6-aa8d-4398-842e-f33ef09117cc" containerName="nova-cell0-conductor-db-sync"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.434163 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4403ebd6-aa8d-4398-842e-f33ef09117cc" containerName="nova-cell0-conductor-db-sync"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.435030 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.441038 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gfcnh"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.441245 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.446062 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.465308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsb8p\" (UniqueName: \"kubernetes.io/projected/535880a7-82d0-47f7-94c1-8c9662d3b32b-kube-api-access-tsb8p\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.465394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.465514 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.567925 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsb8p\" (UniqueName: \"kubernetes.io/projected/535880a7-82d0-47f7-94c1-8c9662d3b32b-kube-api-access-tsb8p\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.568069 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.568264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.573224 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.576324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535880a7-82d0-47f7-94c1-8c9662d3b32b-config-data\") pod \"nova-cell0-conductor-0\"
(UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.597627 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsb8p\" (UniqueName: \"kubernetes.io/projected/535880a7-82d0-47f7-94c1-8c9662d3b32b-kube-api-access-tsb8p\") pod \"nova-cell0-conductor-0\" (UID: \"535880a7-82d0-47f7-94c1-8c9662d3b32b\") " pod="openstack/nova-cell0-conductor-0" Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.756524 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 26 14:42:51 crc kubenswrapper[4809]: I0226 14:42:51.809717 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:42:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:42:51 crc kubenswrapper[4809]: > Feb 26 14:42:52 crc kubenswrapper[4809]: I0226 14:42:52.375614 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 26 14:42:53 crc kubenswrapper[4809]: I0226 14:42:53.321641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"535880a7-82d0-47f7-94c1-8c9662d3b32b","Type":"ContainerStarted","Data":"25e631a6511e91029c9f0f30340e057e23e3c930e5b71045a528aa2d0a7a8df1"} Feb 26 14:42:53 crc kubenswrapper[4809]: I0226 14:42:53.322097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"535880a7-82d0-47f7-94c1-8c9662d3b32b","Type":"ContainerStarted","Data":"5bc0cfed685b4845d7ab9e3cfb6b16e5d3a9537e59d988570644f390670e6319"} Feb 26 14:42:53 crc kubenswrapper[4809]: I0226 14:42:53.322125 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 26 14:42:53 crc kubenswrapper[4809]: I0226 14:42:53.349031 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.348986648 podStartE2EDuration="2.348986648s" podCreationTimestamp="2026-02-26 14:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:42:53.342957147 +0000 UTC m=+1751.816277670" watchObservedRunningTime="2026-02-26 14:42:53.348986648 +0000 UTC m=+1751.822307181" Feb 26 14:42:57 crc kubenswrapper[4809]: I0226 14:42:57.371303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-gjsb5" event={"ID":"8e09189b-a91c-4014-b92b-d8f6bdbd7846","Type":"ContainerStarted","Data":"a7ac3d8450e007489498245c64f81d771c859903399d2a8df5eb43d65ecc1558"} Feb 26 14:42:57 crc kubenswrapper[4809]: I0226 14:42:57.390920 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-gjsb5" podStartSLOduration=2.638433876 podStartE2EDuration="8.390900997s" podCreationTimestamp="2026-02-26 14:42:49 +0000 UTC" firstStartedPulling="2026-02-26 14:42:50.677600694 +0000 UTC m=+1749.150921217" lastFinishedPulling="2026-02-26 14:42:56.430067815 +0000 UTC m=+1754.903388338" observedRunningTime="2026-02-26 14:42:57.384800774 +0000 UTC m=+1755.858121297" watchObservedRunningTime="2026-02-26 14:42:57.390900997 +0000 UTC m=+1755.864221520" Feb 26 14:43:01 crc kubenswrapper[4809]: I0226 
14:43:01.415606 4809 generic.go:334] "Generic (PLEG): container finished" podID="8e09189b-a91c-4014-b92b-d8f6bdbd7846" containerID="a7ac3d8450e007489498245c64f81d771c859903399d2a8df5eb43d65ecc1558" exitCode=0 Feb 26 14:43:01 crc kubenswrapper[4809]: I0226 14:43:01.415798 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-gjsb5" event={"ID":"8e09189b-a91c-4014-b92b-d8f6bdbd7846","Type":"ContainerDied","Data":"a7ac3d8450e007489498245c64f81d771c859903399d2a8df5eb43d65ecc1558"} Feb 26 14:43:01 crc kubenswrapper[4809]: I0226 14:43:01.788360 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 26 14:43:01 crc kubenswrapper[4809]: I0226 14:43:01.813957 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:43:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:43:01 crc kubenswrapper[4809]: > Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.468108 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2vl8t"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.470513 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.475529 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.481780 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.490660 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.490848 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.490886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.490914 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmtn\" (UniqueName: \"kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.491592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-2vl8t"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.594398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.594829 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.594856 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.594871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmtn\" (UniqueName: \"kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.621978 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.632618 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.640844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.683687 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmtn\" (UniqueName: \"kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn\") pod \"nova-cell0-cell-mapping-2vl8t\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.747148 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.750147 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.766972 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.799771 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9x9\" (UniqueName: \"kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.799830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.800075 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.807218 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.819546 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.856223 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.858733 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.870629 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.893416 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.895850 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.904903 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.904943 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xkk\" (UniqueName: \"kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.904989 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905138 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905160 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcb29\" (UniqueName: \"kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.905217 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt9x9\" (UniqueName: \"kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 
14:43:02.905236 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.906615 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.944515 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9x9\" (UniqueName: \"kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.945778 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.948066 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data\") pod \"nova-scheduler-0\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:02 crc kubenswrapper[4809]: I0226 14:43:02.982501 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.023595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.023670 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xkk\" (UniqueName: \"kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.023796 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.023832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.024127 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: 
I0226 14:43:03.024168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcb29\" (UniqueName: \"kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.024210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.038529 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.056535 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.056824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.073392 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcb29\" (UniqueName: \"kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29\") pod \"nova-api-0\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.074737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.085636 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9xkk\" (UniqueName: \"kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.109635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.110056 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.153953 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.158273 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.241319 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-gjsb5" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.255592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.274444 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:03 crc kubenswrapper[4809]: E0226 14:43:03.275447 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e09189b-a91c-4014-b92b-d8f6bdbd7846" containerName="aodh-db-sync" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.275481 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09189b-a91c-4014-b92b-d8f6bdbd7846" containerName="aodh-db-sync" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.275765 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e09189b-a91c-4014-b92b-d8f6bdbd7846" containerName="aodh-db-sync" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.278961 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.283513 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.305240 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.341367 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts\") pod \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.343472 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data\") pod \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.343683 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle\") pod \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.343837 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfdpx\" (UniqueName: \"kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx\") pod \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\" (UID: \"8e09189b-a91c-4014-b92b-d8f6bdbd7846\") " Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.360062 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx" (OuterVolumeSpecName: "kube-api-access-bfdpx") pod "8e09189b-a91c-4014-b92b-d8f6bdbd7846" (UID: 
"8e09189b-a91c-4014-b92b-d8f6bdbd7846"). InnerVolumeSpecName "kube-api-access-bfdpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.383691 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts" (OuterVolumeSpecName: "scripts") pod "8e09189b-a91c-4014-b92b-d8f6bdbd7846" (UID: "8e09189b-a91c-4014-b92b-d8f6bdbd7846"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.426163 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data" (OuterVolumeSpecName: "config-data") pod "8e09189b-a91c-4014-b92b-d8f6bdbd7846" (UID: "8e09189b-a91c-4014-b92b-d8f6bdbd7846"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.427226 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.433147 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.459507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.459671 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.459729 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.460055 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2tx\" (UniqueName: \"kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.460218 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfdpx\" (UniqueName: \"kubernetes.io/projected/8e09189b-a91c-4014-b92b-d8f6bdbd7846-kube-api-access-bfdpx\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.460228 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.460236 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.462473 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.510087 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e09189b-a91c-4014-b92b-d8f6bdbd7846" (UID: "8e09189b-a91c-4014-b92b-d8f6bdbd7846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.515513 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-gjsb5" event={"ID":"8e09189b-a91c-4014-b92b-d8f6bdbd7846","Type":"ContainerDied","Data":"4f356c55659b67cddd884ad6264717d6763391e6c687395a57426fb632a9bbc4"} Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.515547 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f356c55659b67cddd884ad6264717d6763391e6c687395a57426fb632a9bbc4" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.515597 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-gjsb5" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564350 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564430 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564556 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564924 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.564980 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.565027 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.565276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.565509 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fk4\" (UniqueName: \"kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.565553 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd2tx\" (UniqueName: \"kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.565861 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e09189b-a91c-4014-b92b-d8f6bdbd7846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.568233 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.572652 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.574926 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.610813 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2tx\" (UniqueName: \"kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx\") pod \"nova-metadata-0\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.680110 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.680299 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.680396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.680647 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.681116 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.681317 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fk4\" (UniqueName: \"kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.681626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.682356 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.682909 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.683277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.686863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.706267 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fk4\" (UniqueName: \"kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4\") pod \"dnsmasq-dns-5fbc4d444f-6v92w\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.798663 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.838946 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2vl8t"] Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.907140 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:03 crc kubenswrapper[4809]: I0226 14:43:03.968990 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.035318 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7nvf"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.037614 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.059647 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7nvf"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.065249 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.065611 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.156488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.157041 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.157079 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.157188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqpj\" (UniqueName: \"kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.204107 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.229330 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.260978 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqpj\" (UniqueName: \"kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.262971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.263296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.263330 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.275140 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.276006 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.279831 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.294791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqpj\" (UniqueName: \"kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj\") pod \"nova-cell1-conductor-db-sync-j7nvf\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.393901 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.492364 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.496107 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.498719 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.498929 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6p9fd" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.499150 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.507956 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.576284 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2vl8t" event={"ID":"538bc3b6-9ed2-48da-8596-55ca0077a9df","Type":"ContainerStarted","Data":"c84b1b20373054ecd2e5a080b4188a6fdd14c7eda0fc36a5ee78774475e05e62"} Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.576345 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2vl8t" event={"ID":"538bc3b6-9ed2-48da-8596-55ca0077a9df","Type":"ContainerStarted","Data":"bbe98ba47c4ad446d0a9ce1aa72ce3068eb95b7e49eb4186413a4920b663c5dc"} Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.588914 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerStarted","Data":"7b4cb95c92c8df499a6cde48c33bc7d55a4ee34de17f83afa4622a45757aceba"} Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.590495 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.590556 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.590625 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qnkd\" (UniqueName: \"kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.590654 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.604410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0216ea6d-d0cc-423f-81e0-a49c95148181","Type":"ContainerStarted","Data":"a1e71690ede76954b43d2fd94b4db24f0cdcfdde9d36dc0baae3ff7cf0c360dc"} Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.605588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"3a480769-4696-4ae8-a896-af49f97b10c9","Type":"ContainerStarted","Data":"73c1daa8286e53cf7c6ac8ea0668e29be3408ad72df390ad45ee3d22369af2cc"} Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.606509 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.664361 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2vl8t" podStartSLOduration=2.664337524 podStartE2EDuration="2.664337524s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:04.632517681 +0000 UTC m=+1763.105838224" watchObservedRunningTime="2026-02-26 14:43:04.664337524 +0000 UTC m=+1763.137658047" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.696658 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.696745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.696813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qnkd\" (UniqueName: \"kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.696844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.707527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.733814 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.744511 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qnkd\" (UniqueName: \"kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd\") pod \"aodh-0\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.751457 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts\") pod \"aodh-0\" (UID: 
\"3133dbd9-9024-4d17-90ca-f254da2382cb\") " pod="openstack/aodh-0" Feb 26 14:43:04 crc kubenswrapper[4809]: I0226 14:43:04.872213 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.080914 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.263124 4809 scope.go:117] "RemoveContainer" containerID="1ea9f13f2cdfa3cf44dea40efeb7ee4be4b71ebecc15d355ad6bf613120ac8c9" Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.375200 4809 scope.go:117] "RemoveContainer" containerID="3a20fc9d66580aec6dc9da167a444997e614434050761430b9ed79cd635d5290" Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.448562 4809 scope.go:117] "RemoveContainer" containerID="702799a5674cf93832e8a9e68ac5b3407f140c8daa635011a57253842b538ec8" Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.522556 4809 scope.go:117] "RemoveContainer" containerID="3d97f6830958b6b69f453794d44eda14b7e134a1ba5b744b593848f3558bddac" Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.572065 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7nvf"] Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.671083 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.673335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerStarted","Data":"a14cbcd42abbc4c9821fea0c82ed00d6082a7aa633259f61627aa65a89e99354"} Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.725423 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" event={"ID":"3b753f14-9d84-40a0-963f-233a8d25d27f","Type":"ContainerStarted","Data":"7ef60a9e713e5414ba020cfbd8d96589f40d7762069e18c62d0a041f1ffb3db5"} Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.761650 4809 generic.go:334] "Generic (PLEG): container finished" podID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerID="86ac92af8096abe44e43e0472db00406a608518de672143971c03f1700c6df93" exitCode=0 Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.761958 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" event={"ID":"58e7e511-833d-49d6-bff6-d490bf3293d0","Type":"ContainerDied","Data":"86ac92af8096abe44e43e0472db00406a608518de672143971c03f1700c6df93"} Feb 26 14:43:05 crc kubenswrapper[4809]: I0226 14:43:05.761992 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" event={"ID":"58e7e511-833d-49d6-bff6-d490bf3293d0","Type":"ContainerStarted","Data":"e1a03fa39295a9cae43bf6981ccfdca5d466e908350f862bd2aad6070437928a"} Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.791517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerStarted","Data":"0690e42bfcc9d3d668e747cf189e986d203e92bc300dd364631b8e1dc86d9a04"} Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.796598 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" event={"ID":"58e7e511-833d-49d6-bff6-d490bf3293d0","Type":"ContainerStarted","Data":"0901168ef5f0487898a833ef432b345526a0ca2a40f9139ca62318a7c5e09b6a"} Feb 26 14:43:06 crc 
kubenswrapper[4809]: I0226 14:43:06.796763 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.803825 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" event={"ID":"3b753f14-9d84-40a0-963f-233a8d25d27f","Type":"ContainerStarted","Data":"249b416ff52e64602436ff0aec2ff70da8ee80718fe6eda9ef8905c72730ab94"} Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.838128 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" podStartSLOduration=4.838106239 podStartE2EDuration="4.838106239s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:06.822446464 +0000 UTC m=+1765.295766987" watchObservedRunningTime="2026-02-26 14:43:06.838106239 +0000 UTC m=+1765.311426762" Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.864004 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" podStartSLOduration=2.8639793730000003 podStartE2EDuration="2.863979373s" podCreationTimestamp="2026-02-26 14:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:06.846228439 +0000 UTC m=+1765.319548962" watchObservedRunningTime="2026-02-26 14:43:06.863979373 +0000 UTC m=+1765.337299896" Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.903071 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:06 crc kubenswrapper[4809]: I0226 14:43:06.924556 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.814479 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.815161 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-central-agent" containerID="cri-o://7cacc812556f836dc908844efbf858db3ac668a0cab8dacd865547954bf6603d" gracePeriod=30 Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.815283 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" containerID="cri-o://082bd640c401791a3d397d10b7b862631670ae652a869c89275ce108288268c1" gracePeriod=30 Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.815315 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="sg-core" containerID="cri-o://bec3a9c7312aad1a4125315d8b0074291e0fa641ebd6763f31359d19e71a3945" gracePeriod=30 Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.815344 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-notification-agent" containerID="cri-o://a873706770e4266a295709528f29b42bf1ddea948366438c8eefd3720e2d4366" gracePeriod=30 Feb 26 14:43:08 crc kubenswrapper[4809]: I0226 14:43:08.850075 
4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": EOF" Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.469403 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": dial tcp 10.217.0.242:3000: connect: connection refused" Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.850887 4809 generic.go:334] "Generic (PLEG): container finished" podID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerID="082bd640c401791a3d397d10b7b862631670ae652a869c89275ce108288268c1" exitCode=0 Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.851256 4809 generic.go:334] "Generic (PLEG): container finished" podID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerID="bec3a9c7312aad1a4125315d8b0074291e0fa641ebd6763f31359d19e71a3945" exitCode=2 Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.851268 4809 generic.go:334] "Generic (PLEG): container finished" podID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerID="7cacc812556f836dc908844efbf858db3ac668a0cab8dacd865547954bf6603d" exitCode=0 Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.851291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerDied","Data":"082bd640c401791a3d397d10b7b862631670ae652a869c89275ce108288268c1"} Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.851323 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerDied","Data":"bec3a9c7312aad1a4125315d8b0074291e0fa641ebd6763f31359d19e71a3945"} Feb 26 14:43:09 crc kubenswrapper[4809]: I0226 14:43:09.851335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerDied","Data":"7cacc812556f836dc908844efbf858db3ac668a0cab8dacd865547954bf6603d"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.874240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerStarted","Data":"417806a747da43a16eb3d17fce637bacc50ed296c43c169e5ed69d0c0f46899c"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.874710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerStarted","Data":"04ba4c89c7729edd881a9d5e92a4a15e256c52eecedd306d1d920ecd10da6a1f"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.874862 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-log" containerID="cri-o://04ba4c89c7729edd881a9d5e92a4a15e256c52eecedd306d1d920ecd10da6a1f" gracePeriod=30 Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.874938 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-metadata" containerID="cri-o://417806a747da43a16eb3d17fce637bacc50ed296c43c169e5ed69d0c0f46899c" gracePeriod=30 Feb 26 
14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.877697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a480769-4696-4ae8-a896-af49f97b10c9","Type":"ContainerStarted","Data":"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.883697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerStarted","Data":"7cd9decc2cbdf3de17889378131bd5e9c4de3ea0e28f8018dcd29809b304c4d2"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.893128 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerStarted","Data":"fdc15ec388008b450b455be3e5f045edcba6b7d8ab726b1a32d3f448b3337b13"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.893183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerStarted","Data":"5d28e81af027b225f91425c6f02f5fb20e2d32a5542361e23d20658f1d2a42fc"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.900647 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0216ea6d-d0cc-423f-81e0-a49c95148181","Type":"ContainerStarted","Data":"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3"} Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.900816 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0216ea6d-d0cc-423f-81e0-a49c95148181" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3" gracePeriod=30 Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.903679 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.422670702 podStartE2EDuration="8.903664749s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="2026-02-26 14:43:05.033111316 +0000 UTC m=+1763.506431839" lastFinishedPulling="2026-02-26 14:43:09.514105363 +0000 UTC m=+1767.987425886" observedRunningTime="2026-02-26 14:43:10.894590032 +0000 UTC m=+1769.367910555" watchObservedRunningTime="2026-02-26 14:43:10.903664749 +0000 UTC m=+1769.376985272" Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.936811 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.543246712 podStartE2EDuration="8.93679412s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="2026-02-26 14:43:04.076777042 +0000 UTC m=+1762.550097565" lastFinishedPulling="2026-02-26 14:43:09.47032445 +0000 UTC m=+1767.943644973" observedRunningTime="2026-02-26 14:43:10.919943561 +0000 UTC m=+1769.393264094" watchObservedRunningTime="2026-02-26 14:43:10.93679412 +0000 UTC m=+1769.410114643" Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.960163 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.6389941009999998 podStartE2EDuration="8.960135863s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="2026-02-26 14:43:04.211559589 +0000 UTC m=+1762.684880112" lastFinishedPulling="2026-02-26 14:43:09.532701351 +0000 UTC m=+1768.006021874" 
observedRunningTime="2026-02-26 14:43:10.941441132 +0000 UTC m=+1769.414761665" watchObservedRunningTime="2026-02-26 14:43:10.960135863 +0000 UTC m=+1769.433456376" Feb 26 14:43:10 crc kubenswrapper[4809]: I0226 14:43:10.968722 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.871719549 podStartE2EDuration="8.968700656s" podCreationTimestamp="2026-02-26 14:43:02 +0000 UTC" firstStartedPulling="2026-02-26 14:43:04.279786946 +0000 UTC m=+1762.753107469" lastFinishedPulling="2026-02-26 14:43:09.376768053 +0000 UTC m=+1767.850088576" observedRunningTime="2026-02-26 14:43:10.961340087 +0000 UTC m=+1769.434660620" watchObservedRunningTime="2026-02-26 14:43:10.968700656 +0000 UTC m=+1769.442021179" Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.092532 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.793739 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.794149 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.794204 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.795456 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.795532 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" gracePeriod=600 Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.815231 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:43:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:43:11 crc kubenswrapper[4809]: > Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.929983 4809 generic.go:334] "Generic (PLEG): container finished" podID="1243c611-5f78-4234-8c54-b9999a2fd507" containerID="417806a747da43a16eb3d17fce637bacc50ed296c43c169e5ed69d0c0f46899c" exitCode=0 Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.930325 4809 generic.go:334] "Generic (PLEG): container finished" podID="1243c611-5f78-4234-8c54-b9999a2fd507" 
containerID="04ba4c89c7729edd881a9d5e92a4a15e256c52eecedd306d1d920ecd10da6a1f" exitCode=143 Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.930394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerDied","Data":"417806a747da43a16eb3d17fce637bacc50ed296c43c169e5ed69d0c0f46899c"} Feb 26 14:43:11 crc kubenswrapper[4809]: I0226 14:43:11.930438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerDied","Data":"04ba4c89c7729edd881a9d5e92a4a15e256c52eecedd306d1d920ecd10da6a1f"} Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.148535 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.154305 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.154774 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.160278 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.177916 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.178232 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.217650 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" exitCode=0 Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.221203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155"} Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.221315 4809 scope.go:117] "RemoveContainer" containerID="56dbb7b410f3314a8d0d4d19c41ad3338a19ccab03e1e83161a98fc698033ce0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.222844 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/opa namespace/openshift-logging: Readiness probe status=failure output="" start-of-body= Feb 26 14:43:13 crc kubenswrapper[4809]: E0226 14:43:13.526498 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.769631 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.819628 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.897063 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.918160 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"] Feb 26 14:43:13 crc kubenswrapper[4809]: I0226 14:43:13.918461 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="dnsmasq-dns" containerID="cri-o://22e799992c42e9ec952a01a52a059a4590d1eeee391ae6e138b67a0c84af4deb" gracePeriod=10 Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.019840 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle\") pod \"1243c611-5f78-4234-8c54-b9999a2fd507\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.020246 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd2tx\" (UniqueName: \"kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx\") pod \"1243c611-5f78-4234-8c54-b9999a2fd507\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.020329 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data\") pod \"1243c611-5f78-4234-8c54-b9999a2fd507\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.020476 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs\") pod \"1243c611-5f78-4234-8c54-b9999a2fd507\" (UID: \"1243c611-5f78-4234-8c54-b9999a2fd507\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.025455 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs" (OuterVolumeSpecName: "logs") pod "1243c611-5f78-4234-8c54-b9999a2fd507" (UID: "1243c611-5f78-4234-8c54-b9999a2fd507"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.042546 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx" (OuterVolumeSpecName: "kube-api-access-sd2tx") pod "1243c611-5f78-4234-8c54-b9999a2fd507" (UID: "1243c611-5f78-4234-8c54-b9999a2fd507"). InnerVolumeSpecName "kube-api-access-sd2tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.122895 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1243c611-5f78-4234-8c54-b9999a2fd507" (UID: "1243c611-5f78-4234-8c54-b9999a2fd507"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.123670 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.123692 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd2tx\" (UniqueName: \"kubernetes.io/projected/1243c611-5f78-4234-8c54-b9999a2fd507-kube-api-access-sd2tx\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.123707 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1243c611-5f78-4234-8c54-b9999a2fd507-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.180689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data" (OuterVolumeSpecName: "config-data") pod "1243c611-5f78-4234-8c54-b9999a2fd507" (UID: "1243c611-5f78-4234-8c54-b9999a2fd507"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.226811 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1243c611-5f78-4234-8c54-b9999a2fd507-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.250270 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.250691 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.255102 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:43:14 crc kubenswrapper[4809]: E0226 14:43:14.259963 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.292254 4809 generic.go:334] "Generic (PLEG): container finished" podID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerID="22e799992c42e9ec952a01a52a059a4590d1eeee391ae6e138b67a0c84af4deb" exitCode=0 Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.298537 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" event={"ID":"56890ecc-238d-4b33-b0cc-67c8a5831266","Type":"ContainerDied","Data":"22e799992c42e9ec952a01a52a059a4590d1eeee391ae6e138b67a0c84af4deb"} Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 
14:43:14.301867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1243c611-5f78-4234-8c54-b9999a2fd507","Type":"ContainerDied","Data":"a14cbcd42abbc4c9821fea0c82ed00d6082a7aa633259f61627aa65a89e99354"} Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.302005 4809 scope.go:117] "RemoveContainer" containerID="417806a747da43a16eb3d17fce637bacc50ed296c43c169e5ed69d0c0f46899c" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.302215 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.327875 4809 generic.go:334] "Generic (PLEG): container finished" podID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerID="a873706770e4266a295709528f29b42bf1ddea948366438c8eefd3720e2d4366" exitCode=0 Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.327939 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerDied","Data":"a873706770e4266a295709528f29b42bf1ddea948366438c8eefd3720e2d4366"} Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.327963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8153a36e-95ad-46b4-9c04-4c5aaefafe93","Type":"ContainerDied","Data":"ce61493cf28cb6432b033e1422394363f9bc16d9089ccd6a408972d042808c08"} Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.327973 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce61493cf28cb6432b033e1422394363f9bc16d9089ccd6a408972d042808c08" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.342593 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerStarted","Data":"021bdf02551950e5c37f9405c0dea2b67a12516227aed0da5d55df9ae1d7bb09"} Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.348701 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.400899 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.402139 4809 scope.go:117] "RemoveContainer" containerID="04ba4c89c7729edd881a9d5e92a4a15e256c52eecedd306d1d920ecd10da6a1f" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.437863 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.437981 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjd88\" (UniqueName: \"kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.438170 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.438260 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.438284 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.438368 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.438432 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml\") pod \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\" (UID: \"8153a36e-95ad-46b4-9c04-4c5aaefafe93\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.443482 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.453815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.463182 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts" (OuterVolumeSpecName: "scripts") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.487109 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88" (OuterVolumeSpecName: "kube-api-access-cjd88") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "kube-api-access-cjd88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.502448 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.545582 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjd88\" (UniqueName: \"kubernetes.io/projected/8153a36e-95ad-46b4-9c04-4c5aaefafe93-kube-api-access-cjd88\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.545612 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.545621 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.545629 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8153a36e-95ad-46b4-9c04-4c5aaefafe93-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.545638 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.665366 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.685239 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data" (OuterVolumeSpecName: "config-data") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.709367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8153a36e-95ad-46b4-9c04-4c5aaefafe93" (UID: "8153a36e-95ad-46b4-9c04-4c5aaefafe93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.750236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.750642 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.751048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.751125 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7h69\" (UniqueName: \"kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.751263 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.751316 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb\") pod \"56890ecc-238d-4b33-b0cc-67c8a5831266\" (UID: \"56890ecc-238d-4b33-b0cc-67c8a5831266\") " Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.752003 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.752034 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8153a36e-95ad-46b4-9c04-4c5aaefafe93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.768310 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69" (OuterVolumeSpecName: "kube-api-access-b7h69") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "kube-api-access-b7h69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.819543 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config" (OuterVolumeSpecName: "config") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.846250 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.855283 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.855307 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7h69\" (UniqueName: \"kubernetes.io/projected/56890ecc-238d-4b33-b0cc-67c8a5831266-kube-api-access-b7h69\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.855315 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.877527 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.878871 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.886972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56890ecc-238d-4b33-b0cc-67c8a5831266" (UID: "56890ecc-238d-4b33-b0cc-67c8a5831266"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.957947 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.957990 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:14 crc kubenswrapper[4809]: I0226 14:43:14.958004 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56890ecc-238d-4b33-b0cc-67c8a5831266-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.373428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" event={"ID":"56890ecc-238d-4b33-b0cc-67c8a5831266","Type":"ContainerDied","Data":"8d26a9eaf2f1e569580620f0481e333e552cb9aa5d2661ea671f6c326c11e841"} Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.373483 4809 scope.go:117] "RemoveContainer" containerID="22e799992c42e9ec952a01a52a059a4590d1eeee391ae6e138b67a0c84af4deb" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.373665 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-xqg8z" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.383129 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.437080 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"] Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.458229 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-xqg8z"] Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.470745 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.485939 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503090 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503600 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="sg-core" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503621 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="sg-core" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503638 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-central-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503646 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-central-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503663 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-metadata" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503668 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-metadata" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503686 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="init" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503693 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="init" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503702 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-log" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503708 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-log" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503727 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="dnsmasq-dns" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503733 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="dnsmasq-dns" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503745 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503750 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" Feb 26 14:43:15 crc kubenswrapper[4809]: E0226 14:43:15.503772 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-notification-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503778 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-notification-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.503998 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-metadata" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504027 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" containerName="nova-metadata-log" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504046 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-central-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504059 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="proxy-httpd" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504071 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="ceilometer-notification-agent" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504079 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" containerName="dnsmasq-dns" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.504088 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" containerName="sg-core" Feb 26 
14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.506149 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.509454 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.509493 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.524839 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.612972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74hv\" (UniqueName: \"kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613163 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613228 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613273 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613301 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.613417 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.715676 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts\") pod \"ceilometer-0\" (UID: 
\"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.715810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.715905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.715943 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.716128 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.716164 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.716274 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74hv\" (UniqueName: \"kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.716771 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.716810 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.724498 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.724629 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.724765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.725288 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.739991 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74hv\" (UniqueName: \"kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv\") pod \"ceilometer-0\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " pod="openstack/ceilometer-0" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.787624 4809 scope.go:117] "RemoveContainer" containerID="5a1df1317b4f209bb4f84b463b5a8ae3077a521d95e1d0265f7abcbb28f891aa" Feb 26 14:43:15 crc kubenswrapper[4809]: I0226 14:43:15.827218 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:16 crc kubenswrapper[4809]: I0226 14:43:16.299885 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56890ecc-238d-4b33-b0cc-67c8a5831266" path="/var/lib/kubelet/pods/56890ecc-238d-4b33-b0cc-67c8a5831266/volumes" Feb 26 14:43:16 crc kubenswrapper[4809]: I0226 14:43:16.301140 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8153a36e-95ad-46b4-9c04-4c5aaefafe93" path="/var/lib/kubelet/pods/8153a36e-95ad-46b4-9c04-4c5aaefafe93/volumes" Feb 26 14:43:16 crc kubenswrapper[4809]: I0226 14:43:16.400480 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerStarted","Data":"f9e77e35e2fa4212caa6c73f7aa837348389e623db63f37ca0042dd6d275143a"} Feb 26 14:43:16 crc kubenswrapper[4809]: I0226 14:43:16.450452 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:17 crc kubenswrapper[4809]: I0226 14:43:17.417998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerStarted","Data":"57185d8f86f4649567477dbffb22a0aab42067cf3d4ab71838b2b965b1ddf54a"} Feb 26 14:43:18 crc kubenswrapper[4809]: I0226 14:43:18.430887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerStarted","Data":"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9"} Feb 26 14:43:18 crc kubenswrapper[4809]: I0226 14:43:18.433156 4809 generic.go:334] "Generic (PLEG): container finished" podID="538bc3b6-9ed2-48da-8596-55ca0077a9df" containerID="c84b1b20373054ecd2e5a080b4188a6fdd14c7eda0fc36a5ee78774475e05e62" exitCode=0 Feb 26 14:43:18 crc kubenswrapper[4809]: I0226 14:43:18.433210 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2vl8t" 
event={"ID":"538bc3b6-9ed2-48da-8596-55ca0077a9df","Type":"ContainerDied","Data":"c84b1b20373054ecd2e5a080b4188a6fdd14c7eda0fc36a5ee78774475e05e62"} Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.453112 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerStarted","Data":"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f"} Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.463336 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerStarted","Data":"d0847c06530c3fdf0c089b6873b7a912ef34379d9ef4c7b28680d2274e9507c1"} Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.463530 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-api" containerID="cri-o://7cd9decc2cbdf3de17889378131bd5e9c4de3ea0e28f8018dcd29809b304c4d2" gracePeriod=30 Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.464311 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-listener" containerID="cri-o://d0847c06530c3fdf0c089b6873b7a912ef34379d9ef4c7b28680d2274e9507c1" gracePeriod=30 Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.464364 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-notifier" containerID="cri-o://f9e77e35e2fa4212caa6c73f7aa837348389e623db63f37ca0042dd6d275143a" gracePeriod=30 Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.464417 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-evaluator" containerID="cri-o://021bdf02551950e5c37f9405c0dea2b67a12516227aed0da5d55df9ae1d7bb09" gracePeriod=30 Feb 26 14:43:19 crc kubenswrapper[4809]: I0226 14:43:19.492067 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.773114633 podStartE2EDuration="15.492050304s" podCreationTimestamp="2026-02-26 14:43:04 +0000 UTC" firstStartedPulling="2026-02-26 14:43:05.744478655 +0000 UTC m=+1764.217799178" lastFinishedPulling="2026-02-26 14:43:18.463414326 +0000 UTC m=+1776.936734849" observedRunningTime="2026-02-26 14:43:19.490120109 +0000 UTC m=+1777.963440632" watchObservedRunningTime="2026-02-26 14:43:19.492050304 +0000 UTC m=+1777.965370827" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.109206 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.160363 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts\") pod \"538bc3b6-9ed2-48da-8596-55ca0077a9df\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.160553 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle\") pod \"538bc3b6-9ed2-48da-8596-55ca0077a9df\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.160633 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmtn\" (UniqueName: \"kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn\") pod \"538bc3b6-9ed2-48da-8596-55ca0077a9df\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.160666 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data\") pod \"538bc3b6-9ed2-48da-8596-55ca0077a9df\" (UID: \"538bc3b6-9ed2-48da-8596-55ca0077a9df\") " Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.197388 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn" (OuterVolumeSpecName: "kube-api-access-6mmtn") pod "538bc3b6-9ed2-48da-8596-55ca0077a9df" (UID: "538bc3b6-9ed2-48da-8596-55ca0077a9df"). InnerVolumeSpecName "kube-api-access-6mmtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.201990 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts" (OuterVolumeSpecName: "scripts") pod "538bc3b6-9ed2-48da-8596-55ca0077a9df" (UID: "538bc3b6-9ed2-48da-8596-55ca0077a9df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.237833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "538bc3b6-9ed2-48da-8596-55ca0077a9df" (UID: "538bc3b6-9ed2-48da-8596-55ca0077a9df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.264154 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.264193 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.264210 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mmtn\" (UniqueName: \"kubernetes.io/projected/538bc3b6-9ed2-48da-8596-55ca0077a9df-kube-api-access-6mmtn\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.264457 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data" (OuterVolumeSpecName: "config-data") pod "538bc3b6-9ed2-48da-8596-55ca0077a9df" (UID: "538bc3b6-9ed2-48da-8596-55ca0077a9df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.366683 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/538bc3b6-9ed2-48da-8596-55ca0077a9df-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.476149 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2vl8t" event={"ID":"538bc3b6-9ed2-48da-8596-55ca0077a9df","Type":"ContainerDied","Data":"bbe98ba47c4ad446d0a9ce1aa72ce3068eb95b7e49eb4186413a4920b663c5dc"} Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.476178 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2vl8t" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.476203 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbe98ba47c4ad446d0a9ce1aa72ce3068eb95b7e49eb4186413a4920b663c5dc" Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479284 4809 generic.go:334] "Generic (PLEG): container finished" podID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerID="f9e77e35e2fa4212caa6c73f7aa837348389e623db63f37ca0042dd6d275143a" exitCode=0 Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479307 4809 generic.go:334] "Generic (PLEG): container finished" podID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerID="021bdf02551950e5c37f9405c0dea2b67a12516227aed0da5d55df9ae1d7bb09" exitCode=0 Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479316 4809 generic.go:334] "Generic (PLEG): container finished" podID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerID="7cd9decc2cbdf3de17889378131bd5e9c4de3ea0e28f8018dcd29809b304c4d2" exitCode=0 Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479353 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerDied","Data":"f9e77e35e2fa4212caa6c73f7aa837348389e623db63f37ca0042dd6d275143a"} Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerDied","Data":"021bdf02551950e5c37f9405c0dea2b67a12516227aed0da5d55df9ae1d7bb09"} Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.479381 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerDied","Data":"7cd9decc2cbdf3de17889378131bd5e9c4de3ea0e28f8018dcd29809b304c4d2"} Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.482623 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerStarted","Data":"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783"} Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.649300 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.649589 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-log" containerID="cri-o://5d28e81af027b225f91425c6f02f5fb20e2d32a5542361e23d20658f1d2a42fc" gracePeriod=30 Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.649682 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-api" containerID="cri-o://fdc15ec388008b450b455be3e5f045edcba6b7d8ab726b1a32d3f448b3337b13" gracePeriod=30 Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.672086 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:20 crc kubenswrapper[4809]: I0226 14:43:20.672312 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" containerName="nova-scheduler-scheduler" containerID="cri-o://79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" 
Feb 26 14:43:21 crc kubenswrapper[4809]: I0226 14:43:21.495767 4809 generic.go:334] "Generic (PLEG): container finished" podID="3b753f14-9d84-40a0-963f-233a8d25d27f" containerID="249b416ff52e64602436ff0aec2ff70da8ee80718fe6eda9ef8905c72730ab94" exitCode=0
Feb 26 14:43:21 crc kubenswrapper[4809]: I0226 14:43:21.495835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" event={"ID":"3b753f14-9d84-40a0-963f-233a8d25d27f","Type":"ContainerDied","Data":"249b416ff52e64602436ff0aec2ff70da8ee80718fe6eda9ef8905c72730ab94"}
Feb 26 14:43:21 crc kubenswrapper[4809]: I0226 14:43:21.498680 4809 generic.go:334] "Generic (PLEG): container finished" podID="7f82ff03-c683-42f7-9461-9267647c4698" containerID="5d28e81af027b225f91425c6f02f5fb20e2d32a5542361e23d20658f1d2a42fc" exitCode=143
Feb 26 14:43:21 crc kubenswrapper[4809]: I0226 14:43:21.498729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerDied","Data":"5d28e81af027b225f91425c6f02f5fb20e2d32a5542361e23d20658f1d2a42fc"}
Feb 26 14:43:21 crc kubenswrapper[4809]: I0226 14:43:21.806852 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:43:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:43:21 crc kubenswrapper[4809]: >
Feb 26 14:43:22 crc kubenswrapper[4809]: I0226 14:43:22.513248 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerStarted","Data":"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad"}
Feb 26 14:43:22 crc kubenswrapper[4809]: I0226 14:43:22.513872 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 14:43:22 crc kubenswrapper[4809]: I0226 14:43:22.553691 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.508066308 podStartE2EDuration="7.553649407s" podCreationTimestamp="2026-02-26 14:43:15 +0000 UTC" firstStartedPulling="2026-02-26 14:43:16.457109307 +0000 UTC m=+1774.930429830" lastFinishedPulling="2026-02-26 14:43:21.502692406 +0000 UTC m=+1779.976012929" observedRunningTime="2026-02-26 14:43:22.542888671 +0000 UTC m=+1781.016209204" watchObservedRunningTime="2026-02-26 14:43:22.553649407 +0000 UTC m=+1781.026969940"
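In the pod_startup_latency_tracker entry above, the two durations are tied together by the image-pull window: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:43:22.553649407 minus 14:43:15 gives 7.553649407s for ceilometer-0), and podStartSLOduration additionally excludes the span from firstStartedPulling to lastFinishedPulling. Redoing that arithmetic from the m=+ offsets (Go monotonic-clock readings, seconds since the kubelet started) reproduces the logged value:

    # Monotonic offsets (m=+...) copied verbatim from the ceilometer-0 entry.
    first_pull = 1774.930429830   # firstStartedPulling
    last_pull  = 1779.976012929   # lastFinishedPulling

    e2e  = 7.553649407            # podStartE2EDuration from the log
    pull = last_pull - first_pull # time spent pulling images
    slo  = e2e - pull             # time charged against the startup SLO

    print(f"pull window:  {pull:.9f}s")  # 5.045583099s
    print(f"SLO duration: {slo:.9f}s")   # 2.508066308s, matching the log

Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.017182 4809 util.go:48] "No ready sandbox for pod can be found.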
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.113367 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.114766 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.119470 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.119555 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" containerName="nova-scheduler-scheduler" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.133006 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data\") pod \"3b753f14-9d84-40a0-963f-233a8d25d27f\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.133113 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts\") pod \"3b753f14-9d84-40a0-963f-233a8d25d27f\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.133210 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle\") pod \"3b753f14-9d84-40a0-963f-233a8d25d27f\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.133252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqpj\" (UniqueName: \"kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj\") pod \"3b753f14-9d84-40a0-963f-233a8d25d27f\" (UID: \"3b753f14-9d84-40a0-963f-233a8d25d27f\") " Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.141159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts" (OuterVolumeSpecName: "scripts") pod "3b753f14-9d84-40a0-963f-233a8d25d27f" (UID: "3b753f14-9d84-40a0-963f-233a8d25d27f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.142822 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj" (OuterVolumeSpecName: "kube-api-access-kxqpj") pod "3b753f14-9d84-40a0-963f-233a8d25d27f" (UID: "3b753f14-9d84-40a0-963f-233a8d25d27f"). InnerVolumeSpecName "kube-api-access-kxqpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.177629 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b753f14-9d84-40a0-963f-233a8d25d27f" (UID: "3b753f14-9d84-40a0-963f-233a8d25d27f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.183280 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data" (OuterVolumeSpecName: "config-data") pod "3b753f14-9d84-40a0-963f-233a8d25d27f" (UID: "3b753f14-9d84-40a0-963f-233a8d25d27f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.235863 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.235899 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.235909 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b753f14-9d84-40a0-963f-233a8d25d27f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.235918 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqpj\" (UniqueName: \"kubernetes.io/projected/3b753f14-9d84-40a0-963f-233a8d25d27f-kube-api-access-kxqpj\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.528130 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.536977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7nvf" event={"ID":"3b753f14-9d84-40a0-963f-233a8d25d27f","Type":"ContainerDied","Data":"7ef60a9e713e5414ba020cfbd8d96589f40d7762069e18c62d0a041f1ffb3db5"} Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.537005 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ef60a9e713e5414ba020cfbd8d96589f40d7762069e18c62d0a041f1ffb3db5" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.647666 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.648627 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b753f14-9d84-40a0-963f-233a8d25d27f" containerName="nova-cell1-conductor-db-sync" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.648655 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b753f14-9d84-40a0-963f-233a8d25d27f" containerName="nova-cell1-conductor-db-sync" Feb 26 14:43:23 crc kubenswrapper[4809]: E0226 14:43:23.648679 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538bc3b6-9ed2-48da-8596-55ca0077a9df" containerName="nova-manage" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.648688 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="538bc3b6-9ed2-48da-8596-55ca0077a9df" containerName="nova-manage" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.648990 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b753f14-9d84-40a0-963f-233a8d25d27f" containerName="nova-cell1-conductor-db-sync" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.649048 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="538bc3b6-9ed2-48da-8596-55ca0077a9df" containerName="nova-manage" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.650236 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.661373 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.668328 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.760517 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbz8g\" (UniqueName: \"kubernetes.io/projected/5bb3781f-0618-426b-a950-2edc6c6e9317-kube-api-access-hbz8g\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.760975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.761116 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.862885 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.862972 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.863086 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbz8g\" (UniqueName: \"kubernetes.io/projected/5bb3781f-0618-426b-a950-2edc6c6e9317-kube-api-access-hbz8g\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.877692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.881981 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bb3781f-0618-426b-a950-2edc6c6e9317-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.895966 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbz8g\" (UniqueName: \"kubernetes.io/projected/5bb3781f-0618-426b-a950-2edc6c6e9317-kube-api-access-hbz8g\") pod \"nova-cell1-conductor-0\" (UID: \"5bb3781f-0618-426b-a950-2edc6c6e9317\") " pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:23 crc kubenswrapper[4809]: I0226 14:43:23.976400 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.546133 4809 generic.go:334] "Generic (PLEG): container finished" podID="7f82ff03-c683-42f7-9461-9267647c4698" containerID="fdc15ec388008b450b455be3e5f045edcba6b7d8ab726b1a32d3f448b3337b13" exitCode=0 Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.546183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerDied","Data":"fdc15ec388008b450b455be3e5f045edcba6b7d8ab726b1a32d3f448b3337b13"} Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.655347 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.821212 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.928159 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle\") pod \"7f82ff03-c683-42f7-9461-9267647c4698\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.928294 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs\") pod \"7f82ff03-c683-42f7-9461-9267647c4698\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.928449 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data\") pod \"7f82ff03-c683-42f7-9461-9267647c4698\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.928504 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcb29\" (UniqueName: \"kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29\") pod \"7f82ff03-c683-42f7-9461-9267647c4698\" (UID: \"7f82ff03-c683-42f7-9461-9267647c4698\") " Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.930598 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs" (OuterVolumeSpecName: "logs") pod "7f82ff03-c683-42f7-9461-9267647c4698" (UID: "7f82ff03-c683-42f7-9461-9267647c4698"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.952382 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29" (OuterVolumeSpecName: "kube-api-access-fcb29") pod "7f82ff03-c683-42f7-9461-9267647c4698" (UID: "7f82ff03-c683-42f7-9461-9267647c4698"). 
InnerVolumeSpecName "kube-api-access-fcb29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.960735 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f82ff03-c683-42f7-9461-9267647c4698" (UID: "7f82ff03-c683-42f7-9461-9267647c4698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:24 crc kubenswrapper[4809]: I0226 14:43:24.965362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data" (OuterVolumeSpecName: "config-data") pod "7f82ff03-c683-42f7-9461-9267647c4698" (UID: "7f82ff03-c683-42f7-9461-9267647c4698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.031639 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f82ff03-c683-42f7-9461-9267647c4698-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.031679 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.031694 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcb29\" (UniqueName: \"kubernetes.io/projected/7f82ff03-c683-42f7-9461-9267647c4698-kube-api-access-fcb29\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.031711 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f82ff03-c683-42f7-9461-9267647c4698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.266355 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.337167 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle\") pod \"3a480769-4696-4ae8-a896-af49f97b10c9\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.337256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data\") pod \"3a480769-4696-4ae8-a896-af49f97b10c9\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.337335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt9x9\" (UniqueName: \"kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9\") pod \"3a480769-4696-4ae8-a896-af49f97b10c9\" (UID: \"3a480769-4696-4ae8-a896-af49f97b10c9\") " Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.343282 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9" (OuterVolumeSpecName: "kube-api-access-wt9x9") pod "3a480769-4696-4ae8-a896-af49f97b10c9" (UID: "3a480769-4696-4ae8-a896-af49f97b10c9"). InnerVolumeSpecName "kube-api-access-wt9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.376212 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a480769-4696-4ae8-a896-af49f97b10c9" (UID: "3a480769-4696-4ae8-a896-af49f97b10c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.398620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data" (OuterVolumeSpecName: "config-data") pod "3a480769-4696-4ae8-a896-af49f97b10c9" (UID: "3a480769-4696-4ae8-a896-af49f97b10c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.440584 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.440629 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a480769-4696-4ae8-a896-af49f97b10c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.440643 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt9x9\" (UniqueName: \"kubernetes.io/projected/3a480769-4696-4ae8-a896-af49f97b10c9-kube-api-access-wt9x9\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.558944 4809 generic.go:334] "Generic (PLEG): container finished" podID="3a480769-4696-4ae8-a896-af49f97b10c9" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" exitCode=0 Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.559023 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a480769-4696-4ae8-a896-af49f97b10c9","Type":"ContainerDied","Data":"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277"} Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.559080 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a480769-4696-4ae8-a896-af49f97b10c9","Type":"ContainerDied","Data":"73c1daa8286e53cf7c6ac8ea0668e29be3408ad72df390ad45ee3d22369af2cc"} Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.559107 4809 scope.go:117] "RemoveContainer" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.560341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bb3781f-0618-426b-a950-2edc6c6e9317","Type":"ContainerStarted","Data":"2aa97e8a0e5aff9c4140c90bbca2386fcd6c6393f3d700b6f1ad02f33972e937"} Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.560384 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bb3781f-0618-426b-a950-2edc6c6e9317","Type":"ContainerStarted","Data":"ae3dd5ceec124b26c25ff8200c07f099e4f6c760132c3ee8dc9c44cb3de29b82"} Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.560458 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.561419 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.563283 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f82ff03-c683-42f7-9461-9267647c4698","Type":"ContainerDied","Data":"7b4cb95c92c8df499a6cde48c33bc7d55a4ee34de17f83afa4622a45757aceba"} Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.563316 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.591205 4809 scope.go:117] "RemoveContainer" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" Feb 26 14:43:25 crc kubenswrapper[4809]: E0226 14:43:25.595383 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277\": container with ID starting with 79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277 not found: ID does not exist" containerID="79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.595446 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277"} err="failed to get container status \"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277\": rpc error: code = NotFound desc = could not find container \"79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277\": container with ID starting with 79a552b45c2c77051b77983cefd6ab32736db5605b82218844a062cedfb32277 not found: ID does not exist" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.595480 4809 scope.go:117] "RemoveContainer" containerID="fdc15ec388008b450b455be3e5f045edcba6b7d8ab726b1a32d3f448b3337b13" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.617097 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6170749730000002 podStartE2EDuration="2.617074973s" podCreationTimestamp="2026-02-26 14:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:25.580157044 +0000 UTC m=+1784.053477587" watchObservedRunningTime="2026-02-26 14:43:25.617074973 +0000 UTC m=+1784.090395496" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.638499 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.653803 4809 scope.go:117] "RemoveContainer" containerID="5d28e81af027b225f91425c6f02f5fb20e2d32a5542361e23d20658f1d2a42fc" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.655184 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.670722 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.703943 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.721526 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: E0226 14:43:25.722067 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" containerName="nova-scheduler-scheduler" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722088 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" containerName="nova-scheduler-scheduler" Feb 26 14:43:25 crc kubenswrapper[4809]: E0226 14:43:25.722113 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f82ff03-c683-42f7-9461-9267647c4698" 
containerName="nova-api-log" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722120 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-log" Feb 26 14:43:25 crc kubenswrapper[4809]: E0226 14:43:25.722171 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-api" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722178 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-api" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722398 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" containerName="nova-scheduler-scheduler" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722435 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-log" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.722449 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f82ff03-c683-42f7-9461-9267647c4698" containerName="nova-api-api" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.723680 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.726285 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.733346 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.734931 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.740969 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.748375 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.760043 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.849742 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.849823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.849904 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.849928 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.850034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.850090 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.850134 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4hp\" (UniqueName: \"kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.951864 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 
crc kubenswrapper[4809]: I0226 14:43:25.952143 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.952262 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4hp\" (UniqueName: \"kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.952398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.952502 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.952627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.952707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.956038 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.957332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.957694 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.958150 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.960980 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.970526 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7\") pod \"nova-api-0\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " pod="openstack/nova-api-0" Feb 26 14:43:25 crc kubenswrapper[4809]: I0226 14:43:25.972400 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4hp\" (UniqueName: \"kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp\") pod \"nova-scheduler-0\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " pod="openstack/nova-scheduler-0" Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.055343 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.064162 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.282269 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a480769-4696-4ae8-a896-af49f97b10c9" path="/var/lib/kubelet/pods/3a480769-4696-4ae8-a896-af49f97b10c9/volumes" Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.282961 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f82ff03-c683-42f7-9461-9267647c4698" path="/var/lib/kubelet/pods/7f82ff03-c683-42f7-9461-9267647c4698/volumes" Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.624650 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:26 crc kubenswrapper[4809]: I0226 14:43:26.638987 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:43:26 crc kubenswrapper[4809]: W0226 14:43:26.649590 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a668464_1cf9_492a_8e9a_4f712b7c854c.slice/crio-69d8eb29a588cab4f1e0d9a384680ac94e9a766adf795b5d41fd876ae49481c8 WatchSource:0}: Error finding container 69d8eb29a588cab4f1e0d9a384680ac94e9a766adf795b5d41fd876ae49481c8: Status 404 returned error can't find the container with id 69d8eb29a588cab4f1e0d9a384680ac94e9a766adf795b5d41fd876ae49481c8 Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.593720 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a668464-1cf9-492a-8e9a-4f712b7c854c","Type":"ContainerStarted","Data":"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52"} Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.594278 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a668464-1cf9-492a-8e9a-4f712b7c854c","Type":"ContainerStarted","Data":"69d8eb29a588cab4f1e0d9a384680ac94e9a766adf795b5d41fd876ae49481c8"} Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.597363 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerStarted","Data":"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80"} Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.597398 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerStarted","Data":"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b"} Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.597409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerStarted","Data":"f034654a592a2c14dfe451cd6a13c41c8940954ce9b953f324fabb6a4ac4ac8f"} Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.612466 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.61244002 podStartE2EDuration="2.61244002s" podCreationTimestamp="2026-02-26 14:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:27.610103454 +0000 UTC m=+1786.083423987" watchObservedRunningTime="2026-02-26 14:43:27.61244002 +0000 UTC m=+1786.085760543" Feb 26 14:43:27 crc kubenswrapper[4809]: I0226 14:43:27.662653 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.662633125 podStartE2EDuration="2.662633125s" podCreationTimestamp="2026-02-26 14:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:27.66244496 +0000 UTC m=+1786.135765493" watchObservedRunningTime="2026-02-26 14:43:27.662633125 +0000 UTC m=+1786.135953648" Feb 26 14:43:28 crc kubenswrapper[4809]: I0226 14:43:28.257459 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:43:28 crc kubenswrapper[4809]: E0226 14:43:28.257810 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:43:31 crc kubenswrapper[4809]: I0226 14:43:31.064341 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 14:43:31 crc kubenswrapper[4809]: I0226 14:43:31.804234 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" probeResult="failure" output=< Feb 26 14:43:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 14:43:31 crc kubenswrapper[4809]: > Feb 26 14:43:34 crc kubenswrapper[4809]: I0226 14:43:34.006656 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 26 14:43:36 crc kubenswrapper[4809]: I0226 14:43:36.056558 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:43:36 crc kubenswrapper[4809]: I0226 14:43:36.058370 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:43:36 crc kubenswrapper[4809]: I0226 14:43:36.064928 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 14:43:36 crc kubenswrapper[4809]: I0226 14:43:36.205991 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 14:43:36 crc kubenswrapper[4809]: I0226 14:43:36.732411 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 14:43:37 crc kubenswrapper[4809]: I0226 14:43:37.139227 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:37 crc kubenswrapper[4809]: I0226 14:43:37.139322 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.1:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:40 crc kubenswrapper[4809]: I0226 14:43:40.797308 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:43:40 crc kubenswrapper[4809]: I0226 14:43:40.855419 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.041777 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.258106 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:43:41 crc kubenswrapper[4809]: E0226 14:43:41.258465 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.479239 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.648074 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9xkk\" (UniqueName: \"kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk\") pod \"0216ea6d-d0cc-423f-81e0-a49c95148181\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.648230 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle\") pod \"0216ea6d-d0cc-423f-81e0-a49c95148181\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.648419 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data\") pod \"0216ea6d-d0cc-423f-81e0-a49c95148181\" (UID: \"0216ea6d-d0cc-423f-81e0-a49c95148181\") " Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.654156 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk" (OuterVolumeSpecName: "kube-api-access-r9xkk") pod "0216ea6d-d0cc-423f-81e0-a49c95148181" (UID: "0216ea6d-d0cc-423f-81e0-a49c95148181"). InnerVolumeSpecName "kube-api-access-r9xkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.685030 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0216ea6d-d0cc-423f-81e0-a49c95148181" (UID: "0216ea6d-d0cc-423f-81e0-a49c95148181"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.693850 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data" (OuterVolumeSpecName: "config-data") pod "0216ea6d-d0cc-423f-81e0-a49c95148181" (UID: "0216ea6d-d0cc-423f-81e0-a49c95148181"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.751852 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9xkk\" (UniqueName: \"kubernetes.io/projected/0216ea6d-d0cc-423f-81e0-a49c95148181-kube-api-access-r9xkk\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.752957 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.753075 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216ea6d-d0cc-423f-81e0-a49c95148181-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.805777 4809 generic.go:334] "Generic (PLEG): container finished" podID="0216ea6d-d0cc-423f-81e0-a49c95148181" containerID="35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3" exitCode=137 Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.806746 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.808767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0216ea6d-d0cc-423f-81e0-a49c95148181","Type":"ContainerDied","Data":"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3"} Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.808827 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0216ea6d-d0cc-423f-81e0-a49c95148181","Type":"ContainerDied","Data":"a1e71690ede76954b43d2fd94b4db24f0cdcfdde9d36dc0baae3ff7cf0c360dc"} Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.808849 4809 scope.go:117] "RemoveContainer" containerID="35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.844259 4809 scope.go:117] "RemoveContainer" containerID="35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3" Feb 26 14:43:41 crc kubenswrapper[4809]: E0226 14:43:41.844873 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3\": container with ID starting with 35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3 not found: ID does not exist" containerID="35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.844915 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3"} err="failed to get container status \"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3\": rpc error: code = NotFound desc = could not find container \"35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3\": container with ID starting with 35cd1fd52f46c69228d497ead02995f94b4e1a335f77378f377efde232a183d3 not found: ID does not exist" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.849277 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 
14:43:41.865082 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.881195 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:41 crc kubenswrapper[4809]: E0226 14:43:41.881967 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216ea6d-d0cc-423f-81e0-a49c95148181" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.881987 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216ea6d-d0cc-423f-81e0-a49c95148181" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.882299 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216ea6d-d0cc-423f-81e0-a49c95148181" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.883396 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.887077 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.887282 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.887455 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.894160 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.957608 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.957647 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.957791 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.957832 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9qwj\" (UniqueName: \"kubernetes.io/projected/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-kube-api-access-g9qwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:41 crc kubenswrapper[4809]: I0226 14:43:41.957873 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.059809 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.059870 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9qwj\" (UniqueName: \"kubernetes.io/projected/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-kube-api-access-g9qwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.059920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.059989 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.060024 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.064758 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.065587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.066419 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.066856 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.075100 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9qwj\" (UniqueName: \"kubernetes.io/projected/d7e7790d-ec12-4f49-acf4-cf7c9b8680c2-kube-api-access-g9qwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.201782 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.313816 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0216ea6d-d0cc-423f-81e0-a49c95148181" path="/var/lib/kubelet/pods/0216ea6d-d0cc-423f-81e0-a49c95148181/volumes" Feb 26 14:43:42 crc kubenswrapper[4809]: W0226 14:43:42.708447 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7e7790d_ec12_4f49_acf4_cf7c9b8680c2.slice/crio-71145c54bb4a0052ff131a56ecabe232a0dc1f604506443f21504f7d71317009 WatchSource:0}: Error finding container 71145c54bb4a0052ff131a56ecabe232a0dc1f604506443f21504f7d71317009: Status 404 returned error can't find the container with id 71145c54bb4a0052ff131a56ecabe232a0dc1f604506443f21504f7d71317009 Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.713433 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.824216 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2","Type":"ContainerStarted","Data":"71145c54bb4a0052ff131a56ecabe232a0dc1f604506443f21504f7d71317009"} Feb 26 14:43:42 crc kubenswrapper[4809]: I0226 14:43:42.824423 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkxlc" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" containerID="cri-o://4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09" gracePeriod=2 Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.345280 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.511967 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24cp\" (UniqueName: \"kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp\") pod \"93d04d33-c05e-4533-b03a-a5672ac77b7f\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.512113 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities\") pod \"93d04d33-c05e-4533-b03a-a5672ac77b7f\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.512216 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content\") pod \"93d04d33-c05e-4533-b03a-a5672ac77b7f\" (UID: \"93d04d33-c05e-4533-b03a-a5672ac77b7f\") " Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.516145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities" (OuterVolumeSpecName: "utilities") pod "93d04d33-c05e-4533-b03a-a5672ac77b7f" (UID: "93d04d33-c05e-4533-b03a-a5672ac77b7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.531283 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp" (OuterVolumeSpecName: "kube-api-access-c24cp") pod "93d04d33-c05e-4533-b03a-a5672ac77b7f" (UID: "93d04d33-c05e-4533-b03a-a5672ac77b7f"). InnerVolumeSpecName "kube-api-access-c24cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.616222 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24cp\" (UniqueName: \"kubernetes.io/projected/93d04d33-c05e-4533-b03a-a5672ac77b7f-kube-api-access-c24cp\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.616262 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.630931 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93d04d33-c05e-4533-b03a-a5672ac77b7f" (UID: "93d04d33-c05e-4533-b03a-a5672ac77b7f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.720416 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93d04d33-c05e-4533-b03a-a5672ac77b7f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.838859 4809 generic.go:334] "Generic (PLEG): container finished" podID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerID="4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09" exitCode=0 Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.838928 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09"} Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.838970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkxlc" event={"ID":"93d04d33-c05e-4533-b03a-a5672ac77b7f","Type":"ContainerDied","Data":"0eab694ce976e2bd91f427054791d25a3e8c6fb5df0ca617a24c1e6a21b21c3c"} Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.838987 4809 scope.go:117] "RemoveContainer" containerID="4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.839140 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkxlc" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.841997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d7e7790d-ec12-4f49-acf4-cf7c9b8680c2","Type":"ContainerStarted","Data":"6973fe2a9b667f00216b8abc74408c18c7c306d78e9d629b5246e0834bc07751"} Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.872087 4809 scope.go:117] "RemoveContainer" containerID="f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.874995 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.874971241 podStartE2EDuration="2.874971241s" podCreationTimestamp="2026-02-26 14:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:43.867996823 +0000 UTC m=+1802.341317516" watchObservedRunningTime="2026-02-26 14:43:43.874971241 +0000 UTC m=+1802.348291764" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.904725 4809 scope.go:117] "RemoveContainer" containerID="42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.931912 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.945183 4809 scope.go:117] "RemoveContainer" containerID="55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d" Feb 26 14:43:43 crc kubenswrapper[4809]: I0226 14:43:43.949742 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkxlc"] Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.026554 4809 scope.go:117] "RemoveContainer" containerID="4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09" Feb 26 14:43:44 crc 
kubenswrapper[4809]: E0226 14:43:44.026992 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09\": container with ID starting with 4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09 not found: ID does not exist" containerID="4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.027040 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09"} err="failed to get container status \"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09\": rpc error: code = NotFound desc = could not find container \"4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09\": container with ID starting with 4a9a30fff55e5441b82b000300bc27b3bd4b5e464727f77a1edab4c4664dea09 not found: ID does not exist" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.027062 4809 scope.go:117] "RemoveContainer" containerID="f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.027472 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b\": container with ID starting with f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b not found: ID does not exist" containerID="f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.027490 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b"} err="failed to get container status \"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b\": rpc error: code = NotFound desc = could not find container \"f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b\": container with ID starting with f760eb809115556311c63dc3f3ca07b1b184ea21dd5d661e13cd7e276144161b not found: ID does not exist" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.027504 4809 scope.go:117] "RemoveContainer" containerID="42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.027836 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59\": container with ID starting with 42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59 not found: ID does not exist" containerID="42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.027857 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59"} err="failed to get container status \"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59\": rpc error: code = NotFound desc = could not find container \"42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59\": container with ID starting with 42fba973affe82ad961e648dd5600f3e589fd47bd2a1cf768dfdea81de829b59 not found: ID does not exist" Feb 26 14:43:44 crc kubenswrapper[4809]: 
I0226 14:43:44.027870 4809 scope.go:117] "RemoveContainer" containerID="55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.028218 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d\": container with ID starting with 55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d not found: ID does not exist" containerID="55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.028237 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d"} err="failed to get container status \"55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d\": rpc error: code = NotFound desc = could not find container \"55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d\": container with ID starting with 55db232c2a29d5f0dd845f08d067c13c9f80f56d34c6438bf3e467a7dcd96a6d not found: ID does not exist" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.276583 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" path="/var/lib/kubelet/pods/93d04d33-c05e-4533-b03a-a5672ac77b7f/volumes" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.329489 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1243c611-5f78-4234-8c54-b9999a2fd507"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1243c611-5f78-4234-8c54-b9999a2fd507] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1243c611_5f78_4234_8c54_b9999a2fd507.slice" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.329536 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod1243c611-5f78-4234-8c54-b9999a2fd507] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod1243c611-5f78-4234-8c54-b9999a2fd507] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1243c611_5f78_4234_8c54_b9999a2fd507.slice" pod="openstack/nova-metadata-0" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.858192 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.892138 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.908244 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.940499 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.941555 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="extract-content" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.941581 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="extract-content" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.941607 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="extract-utilities" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.941616 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="extract-utilities" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.941662 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.941671 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.941689 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.941700 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.942335 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.942378 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.942398 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: E0226 14:43:44.943026 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.943041 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d04d33-c05e-4533-b03a-a5672ac77b7f" containerName="registry-server" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.945283 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.951415 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.951903 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.967184 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.971727 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8hx\" (UniqueName: \"kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.971781 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.971879 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.972002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:44 crc kubenswrapper[4809]: I0226 14:43:44.972290 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.074675 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.074797 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.074863 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " 
pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.074942 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz8hx\" (UniqueName: \"kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.074965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.076212 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.081476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.084908 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.091392 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.095329 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz8hx\" (UniqueName: \"kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx\") pod \"nova-metadata-0\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.271857 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.768928 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:43:45 crc kubenswrapper[4809]: W0226 14:43:45.779943 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c33371c_2bff_4e5e_8f92_c99583b54d6a.slice/crio-44a5c7754478d849d95d4a4ac261689628ecfe56a3601b46df84f82d4626342e WatchSource:0}: Error finding container 44a5c7754478d849d95d4a4ac261689628ecfe56a3601b46df84f82d4626342e: Status 404 returned error can't find the container with id 44a5c7754478d849d95d4a4ac261689628ecfe56a3601b46df84f82d4626342e Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.850136 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 14:43:45 crc kubenswrapper[4809]: I0226 14:43:45.869400 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerStarted","Data":"44a5c7754478d849d95d4a4ac261689628ecfe56a3601b46df84f82d4626342e"} Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.059091 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.059415 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.059912 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.060307 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.061881 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.063650 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.271589 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1243c611-5f78-4234-8c54-b9999a2fd507" path="/var/lib/kubelet/pods/1243c611-5f78-4234-8c54-b9999a2fd507/volumes" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.351564 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.353518 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.372846 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518068 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518143 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518244 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518286 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518429 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkwz\" (UniqueName: \"kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.518628 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkwz\" (UniqueName: \"kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621466 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621611 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621656 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621705 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.621735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.622565 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.622592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.623221 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.623218 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.623429 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.640463 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkwz\" (UniqueName: 
\"kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz\") pod \"dnsmasq-dns-79b5d74c8c-g78k4\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.671094 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.887743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerStarted","Data":"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368"} Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.888147 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerStarted","Data":"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e"} Feb 26 14:43:46 crc kubenswrapper[4809]: I0226 14:43:46.911792 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9117681810000002 podStartE2EDuration="2.911768181s" podCreationTimestamp="2026-02-26 14:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:46.905654927 +0000 UTC m=+1805.378975450" watchObservedRunningTime="2026-02-26 14:43:46.911768181 +0000 UTC m=+1805.385088714" Feb 26 14:43:47 crc kubenswrapper[4809]: I0226 14:43:47.201896 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:47 crc kubenswrapper[4809]: I0226 14:43:47.207924 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:43:47 crc kubenswrapper[4809]: W0226 14:43:47.215176 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd162503c_e431_4c79_9c71_f96f5b981f45.slice/crio-d4a46584e15fb7161349e9a3a05b85e0ccfe4cef9f062c317bae933c39a1de08 WatchSource:0}: Error finding container d4a46584e15fb7161349e9a3a05b85e0ccfe4cef9f062c317bae933c39a1de08: Status 404 returned error can't find the container with id d4a46584e15fb7161349e9a3a05b85e0ccfe4cef9f062c317bae933c39a1de08 Feb 26 14:43:47 crc kubenswrapper[4809]: I0226 14:43:47.909920 4809 generic.go:334] "Generic (PLEG): container finished" podID="d162503c-e431-4c79-9c71-f96f5b981f45" containerID="b341e7d88c926e644044a37ead9b07c7ea6522f7155fb5f215bfccfa0884f481" exitCode=0 Feb 26 14:43:47 crc kubenswrapper[4809]: I0226 14:43:47.910062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" event={"ID":"d162503c-e431-4c79-9c71-f96f5b981f45","Type":"ContainerDied","Data":"b341e7d88c926e644044a37ead9b07c7ea6522f7155fb5f215bfccfa0884f481"} Feb 26 14:43:47 crc kubenswrapper[4809]: I0226 14:43:47.910140 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" event={"ID":"d162503c-e431-4c79-9c71-f96f5b981f45","Type":"ContainerStarted","Data":"d4a46584e15fb7161349e9a3a05b85e0ccfe4cef9f062c317bae933c39a1de08"} Feb 26 14:43:48 crc kubenswrapper[4809]: I0226 14:43:48.926252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" 
event={"ID":"d162503c-e431-4c79-9c71-f96f5b981f45","Type":"ContainerStarted","Data":"09673654fe4d9c56211f5d9e626e6b0613124d76acf90290eecd49efee6aacf5"} Feb 26 14:43:48 crc kubenswrapper[4809]: I0226 14:43:48.926752 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:48 crc kubenswrapper[4809]: I0226 14:43:48.957243 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" podStartSLOduration=2.95722066 podStartE2EDuration="2.95722066s" podCreationTimestamp="2026-02-26 14:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:48.944920851 +0000 UTC m=+1807.418241374" watchObservedRunningTime="2026-02-26 14:43:48.95722066 +0000 UTC m=+1807.430541183" Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.022904 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.023164 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-log" containerID="cri-o://e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.023263 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-api" containerID="cri-o://3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.069256 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.069535 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-central-agent" containerID="cri-o://dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.069589 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="proxy-httpd" containerID="cri-o://696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.069644 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-notification-agent" containerID="cri-o://1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.069642 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="sg-core" containerID="cri-o://dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783" gracePeriod=30 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.955826 4809 generic.go:334] "Generic (PLEG): container finished" podID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerID="e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b" exitCode=143 Feb 26 14:43:49 crc 
kubenswrapper[4809]: I0226 14:43:49.955882 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerDied","Data":"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b"} Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966571 4809 generic.go:334] "Generic (PLEG): container finished" podID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerID="696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad" exitCode=0 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966601 4809 generic.go:334] "Generic (PLEG): container finished" podID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerID="dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783" exitCode=2 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966610 4809 generic.go:334] "Generic (PLEG): container finished" podID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerID="dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9" exitCode=0 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerDied","Data":"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad"} Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerDied","Data":"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783"} Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.966690 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerDied","Data":"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9"} Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.972184 4809 generic.go:334] "Generic (PLEG): container finished" podID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerID="d0847c06530c3fdf0c089b6873b7a912ef34379d9ef4c7b28680d2274e9507c1" exitCode=137 Feb 26 14:43:49 crc kubenswrapper[4809]: I0226 14:43:49.972319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerDied","Data":"d0847c06530c3fdf0c089b6873b7a912ef34379d9ef4c7b28680d2274e9507c1"} Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.287096 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.287148 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.790596 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.928592 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data\") pod \"3133dbd9-9024-4d17-90ca-f254da2382cb\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.928697 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts\") pod \"3133dbd9-9024-4d17-90ca-f254da2382cb\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.928856 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qnkd\" (UniqueName: \"kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd\") pod \"3133dbd9-9024-4d17-90ca-f254da2382cb\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.929008 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle\") pod \"3133dbd9-9024-4d17-90ca-f254da2382cb\" (UID: \"3133dbd9-9024-4d17-90ca-f254da2382cb\") " Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.938527 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd" (OuterVolumeSpecName: "kube-api-access-9qnkd") pod "3133dbd9-9024-4d17-90ca-f254da2382cb" (UID: "3133dbd9-9024-4d17-90ca-f254da2382cb"). InnerVolumeSpecName "kube-api-access-9qnkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:50 crc kubenswrapper[4809]: I0226 14:43:50.943258 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts" (OuterVolumeSpecName: "scripts") pod "3133dbd9-9024-4d17-90ca-f254da2382cb" (UID: "3133dbd9-9024-4d17-90ca-f254da2382cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.021076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3133dbd9-9024-4d17-90ca-f254da2382cb","Type":"ContainerDied","Data":"0690e42bfcc9d3d668e747cf189e986d203e92bc300dd364631b8e1dc86d9a04"} Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.027395 4809 scope.go:117] "RemoveContainer" containerID="d0847c06530c3fdf0c089b6873b7a912ef34379d9ef4c7b28680d2274e9507c1" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.021184 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.031954 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qnkd\" (UniqueName: \"kubernetes.io/projected/3133dbd9-9024-4d17-90ca-f254da2382cb-kube-api-access-9qnkd\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.032206 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.132124 4809 scope.go:117] "RemoveContainer" containerID="f9e77e35e2fa4212caa6c73f7aa837348389e623db63f37ca0042dd6d275143a" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.166092 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data" (OuterVolumeSpecName: "config-data") pod "3133dbd9-9024-4d17-90ca-f254da2382cb" (UID: "3133dbd9-9024-4d17-90ca-f254da2382cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.171365 4809 scope.go:117] "RemoveContainer" containerID="021bdf02551950e5c37f9405c0dea2b67a12516227aed0da5d55df9ae1d7bb09" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.190638 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3133dbd9-9024-4d17-90ca-f254da2382cb" (UID: "3133dbd9-9024-4d17-90ca-f254da2382cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.196669 4809 scope.go:117] "RemoveContainer" containerID="7cd9decc2cbdf3de17889378131bd5e9c4de3ea0e28f8018dcd29809b304c4d2" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.237448 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.237478 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3133dbd9-9024-4d17-90ca-f254da2382cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.367724 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.378473 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.404193 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:51 crc kubenswrapper[4809]: E0226 14:43:51.405049 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-notifier" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405074 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-notifier" Feb 26 14:43:51 crc kubenswrapper[4809]: E0226 14:43:51.405093 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-evaluator" Feb 26 
14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405105 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-evaluator" Feb 26 14:43:51 crc kubenswrapper[4809]: E0226 14:43:51.405130 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-listener" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405141 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-listener" Feb 26 14:43:51 crc kubenswrapper[4809]: E0226 14:43:51.405197 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-api" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405208 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-api" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405605 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-notifier" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405655 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-listener" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405679 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-api" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.405701 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" containerName="aodh-evaluator" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.410346 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.414303 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.414729 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.415494 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.415815 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.416651 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6p9fd" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.419872 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.545655 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.545760 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.545844 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.545915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.545956 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc9bn\" (UniqueName: \"kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.546035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.647680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 
26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.647741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.647809 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.648527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.648648 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.648712 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc9bn\" (UniqueName: \"kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.653422 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.653767 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.654156 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.655130 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.656999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.669271 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc9bn\" 
(UniqueName: \"kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn\") pod \"aodh-0\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " pod="openstack/aodh-0" Feb 26 14:43:51 crc kubenswrapper[4809]: I0226 14:43:51.736597 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.202097 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.222505 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.319577 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3133dbd9-9024-4d17-90ca-f254da2382cb" path="/var/lib/kubelet/pods/3133dbd9-9024-4d17-90ca-f254da2382cb/volumes" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.350618 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:43:52 crc kubenswrapper[4809]: W0226 14:43:52.358313 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706edc08_ac4a_45bc_9fbc_78c486ecd636.slice/crio-d330a5f592452598d558076feca30b6f048029e5a318c8b6324e947abab54810 WatchSource:0}: Error finding container d330a5f592452598d558076feca30b6f048029e5a318c8b6324e947abab54810: Status 404 returned error can't find the container with id d330a5f592452598d558076feca30b6f048029e5a318c8b6324e947abab54810 Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.363130 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.465226 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.569274 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.569523 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.569580 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.569727 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.570408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.570525 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z74hv\" (UniqueName: \"kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.570588 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd\") pod \"419bd019-b059-44ae-a5df-fe3cf7252aea\" (UID: \"419bd019-b059-44ae-a5df-fe3cf7252aea\") " Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.570142 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.571456 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.571520 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.577487 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts" (OuterVolumeSpecName: "scripts") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.602342 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv" (OuterVolumeSpecName: "kube-api-access-z74hv") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "kube-api-access-z74hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.671102 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.675909 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z74hv\" (UniqueName: \"kubernetes.io/projected/419bd019-b059-44ae-a5df-fe3cf7252aea-kube-api-access-z74hv\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.675941 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/419bd019-b059-44ae-a5df-fe3cf7252aea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.675951 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.675960 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.748864 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.777963 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.781938 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data" (OuterVolumeSpecName: "config-data") pod "419bd019-b059-44ae-a5df-fe3cf7252aea" (UID: "419bd019-b059-44ae-a5df-fe3cf7252aea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:52 crc kubenswrapper[4809]: I0226 14:43:52.879911 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419bd019-b059-44ae-a5df-fe3cf7252aea-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.017078 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.083138 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs\") pod \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.083626 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs" (OuterVolumeSpecName: "logs") pod "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" (UID: "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.083814 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7\") pod \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.083894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle\") pod \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.083932 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data\") pod \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\" (UID: \"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765\") " Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.084924 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.085118 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerStarted","Data":"d330a5f592452598d558076feca30b6f048029e5a318c8b6324e947abab54810"} Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.091433 4809 generic.go:334] "Generic (PLEG): container finished" podID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerID="3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80" exitCode=0 Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.091571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerDied","Data":"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80"} Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.091617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765","Type":"ContainerDied","Data":"f034654a592a2c14dfe451cd6a13c41c8940954ce9b953f324fabb6a4ac4ac8f"} Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.091647 4809 scope.go:117] "RemoveContainer" containerID="3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.091737 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.098325 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7" (OuterVolumeSpecName: "kube-api-access-88gx7") pod "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" (UID: "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765"). InnerVolumeSpecName "kube-api-access-88gx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.125926 4809 generic.go:334] "Generic (PLEG): container finished" podID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerID="1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f" exitCode=0 Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.126930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerDied","Data":"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f"} Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.126988 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"419bd019-b059-44ae-a5df-fe3cf7252aea","Type":"ContainerDied","Data":"57185d8f86f4649567477dbffb22a0aab42067cf3d4ab71838b2b965b1ddf54a"} Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.127036 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.164165 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.169750 4809 scope.go:117] "RemoveContainer" containerID="e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.194596 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-kube-api-access-88gx7\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.212852 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" (UID: "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.224614 4809 scope.go:117] "RemoveContainer" containerID="3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.226185 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80\": container with ID starting with 3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80 not found: ID does not exist" containerID="3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.226223 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80"} err="failed to get container status \"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80\": rpc error: code = NotFound desc = could not find container \"3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80\": container with ID starting with 3ea79c555d5df74f16fd89d10d4ef37bcbad78c9fa4a978038c18139cce07e80 not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.226251 4809 scope.go:117] "RemoveContainer" containerID="e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.226751 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b\": container with ID starting with e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b not found: ID does not exist" containerID="e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.226779 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b"} err="failed to get container status \"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b\": rpc error: code = NotFound desc = could not find container \"e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b\": container with ID starting with e73d9b892b73f080beb42122377b970928e6b7a4f46ae1a56fe0eaf12d8df99b not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.226796 4809 scope.go:117] "RemoveContainer" containerID="696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.249872 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data" (OuterVolumeSpecName: "config-data") pod "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" (UID: "1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.277067 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.307724 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.307759 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.308538 4809 scope.go:117] "RemoveContainer" containerID="dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.350064 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.390570 4809 scope.go:117] "RemoveContainer" containerID="1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415178 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415805 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="sg-core" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415822 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="sg-core" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415850 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-log" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415858 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-log" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415876 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-notification-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415885 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-notification-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415911 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-api" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415919 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-api" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415946 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="proxy-httpd" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415953 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="proxy-httpd" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.415971 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" 
containerName="ceilometer-central-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.415979 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-central-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416291 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="sg-core" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416305 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-api" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416320 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="proxy-httpd" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416340 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-central-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416352 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" containerName="ceilometer-notification-agent" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.416373 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" containerName="nova-api-log" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.420191 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.422993 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.425539 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.432743 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.512880 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.512922 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.512965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.512984 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " 
pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.513082 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.513108 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swvz\" (UniqueName: \"kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.513144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.516371 4809 scope.go:117] "RemoveContainer" containerID="dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.587966 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.592280 4809 scope.go:117] "RemoveContainer" containerID="696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.598291 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad\": container with ID starting with 696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad not found: ID does not exist" containerID="696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.598331 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad"} err="failed to get container status \"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad\": rpc error: code = NotFound desc = could not find container \"696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad\": container with ID starting with 696739aba473e33bd91ff04e7f0683fada49f5fcf7bd1b83950df60b70d428ad not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.598356 4809 scope.go:117] "RemoveContainer" containerID="dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.600097 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.602221 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783\": container with ID starting with dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783 not found: ID does not exist" containerID="dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783" Feb 26 14:43:53 crc 
kubenswrapper[4809]: I0226 14:43:53.602257 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783"} err="failed to get container status \"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783\": rpc error: code = NotFound desc = could not find container \"dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783\": container with ID starting with dbb474ee9954f1b35fab0b1ad4905d3c5ae172bf527b862ceaefd6e2c0500783 not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.602279 4809 scope.go:117] "RemoveContainer" containerID="1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.606390 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f\": container with ID starting with 1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f not found: ID does not exist" containerID="1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.606453 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f"} err="failed to get container status \"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f\": rpc error: code = NotFound desc = could not find container \"1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f\": container with ID starting with 1c95841d1bfea0d01e8f77e51a6d40bceb47b23f035285959780d1506e82193f not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.606482 4809 scope.go:117] "RemoveContainer" containerID="dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9" Feb 26 14:43:53 crc kubenswrapper[4809]: E0226 14:43:53.609269 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9\": container with ID starting with dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9 not found: ID does not exist" containerID="dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.609308 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9"} err="failed to get container status \"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9\": rpc error: code = NotFound desc = could not find container \"dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9\": container with ID starting with dd33c46c9df5cf12134e8d31de30370e237bdde0d00688863377dddfb9dabfb9 not found: ID does not exist" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614754 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614803 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614938 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swvz\" (UniqueName: \"kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.614999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.625566 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.625763 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.625942 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.632482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.633090 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.636996 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.643587 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.645550 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.648738 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.650292 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.653507 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.664701 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swvz\" (UniqueName: \"kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz\") pod \"ceilometer-0\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.667022 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.719535 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.719590 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrbq\" (UniqueName: \"kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.719611 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.720087 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.720295 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.720366 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.747376 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-cqrd5"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.752023 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.756049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.756111 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.758275 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cqrd5"] Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.771264 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.826202 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.826812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62b9q\" (UniqueName: \"kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.826861 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.826948 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.826999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.827032 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.827049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.827066 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.827297 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.827362 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrbq\" (UniqueName: \"kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.829225 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.835612 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.835764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.836029 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.836376 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.846802 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrbq\" (UniqueName: \"kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq\") pod \"nova-api-0\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " pod="openstack/nova-api-0" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.939638 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.939999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.940282 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62b9q\" (UniqueName: \"kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.940321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.945882 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.947540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.951952 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:53 crc kubenswrapper[4809]: I0226 14:43:53.964354 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62b9q\" (UniqueName: \"kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q\") pod \"nova-cell1-cell-mapping-cqrd5\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.081483 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.092186 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.145258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerStarted","Data":"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c"} Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.306410 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765" path="/var/lib/kubelet/pods/1b3f8a51-fa8a-4eee-8b7a-1e275a4b4765/volumes" Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.310351 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419bd019-b059-44ae-a5df-fe3cf7252aea" path="/var/lib/kubelet/pods/419bd019-b059-44ae-a5df-fe3cf7252aea/volumes" Feb 26 14:43:54 crc kubenswrapper[4809]: W0226 14:43:54.380855 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf8775d_ef8b_4f0a_ba47_b088e7331c65.slice/crio-057de352ae1df56d2492518010258f16159d9cf2551bc5952a0d58bbcf06e950 WatchSource:0}: Error finding container 057de352ae1df56d2492518010258f16159d9cf2551bc5952a0d58bbcf06e950: Status 404 returned error can't find the container with id 057de352ae1df56d2492518010258f16159d9cf2551bc5952a0d58bbcf06e950 Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.381397 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:43:54 crc kubenswrapper[4809]: W0226 14:43:54.720323 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4145881d_ecb4_4082_9d47_09915db05fb6.slice/crio-91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd WatchSource:0}: Error finding container 91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd: Status 404 returned error can't find the container with id 91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.723257 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cqrd5"] Feb 26 14:43:54 crc kubenswrapper[4809]: I0226 14:43:54.758579 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.174413 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerStarted","Data":"057de352ae1df56d2492518010258f16159d9cf2551bc5952a0d58bbcf06e950"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.187682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cqrd5" event={"ID":"4145881d-ecb4-4082-9d47-09915db05fb6","Type":"ContainerStarted","Data":"91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.197202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerStarted","Data":"136134d905bda1d3f127af74a24180246adcfc03c461919164b8267353afcac6"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.197258 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerStarted","Data":"cabf9be1a9f54bbf354ee8c1f6fd23992b84a2f0464c6dc9370982dd9b1f82e0"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.206826 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerStarted","Data":"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.206867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerStarted","Data":"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059"} Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.216148 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-cqrd5" podStartSLOduration=2.21612781 podStartE2EDuration="2.21612781s" podCreationTimestamp="2026-02-26 14:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:55.212497397 +0000 UTC m=+1813.685817930" watchObservedRunningTime="2026-02-26 14:43:55.21612781 +0000 UTC m=+1813.689448343" Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.272132 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 14:43:55 crc kubenswrapper[4809]: I0226 14:43:55.272185 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.259161 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:43:56 crc kubenswrapper[4809]: E0226 14:43:56.261938 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.306978 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerStarted","Data":"c3346d517dd7b8f70001d33a4f5a3e2f74afa3bda045be3fbd58f88c7f298b3b"} Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.337112 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerStarted","Data":"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1"} Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.320177 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.337169 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cqrd5" 
event={"ID":"4145881d-ecb4-4082-9d47-09915db05fb6","Type":"ContainerStarted","Data":"197f34b03c1f5fa85c062535aeb7f5f41da4c5852984d61b88c0171a04078e86"} Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.320225 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.346948 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.346924271 podStartE2EDuration="3.346924271s" podCreationTimestamp="2026-02-26 14:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:43:56.324592191 +0000 UTC m=+1814.797912714" watchObservedRunningTime="2026-02-26 14:43:56.346924271 +0000 UTC m=+1814.820244804" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.673571 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.804548 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:43:56 crc kubenswrapper[4809]: I0226 14:43:56.804994 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="dnsmasq-dns" containerID="cri-o://0901168ef5f0487898a833ef432b345526a0ca2a40f9139ca62318a7c5e09b6a" gracePeriod=10 Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.305792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerStarted","Data":"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc"} Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.342839 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerStarted","Data":"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56"} Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.357121 4809 generic.go:334] "Generic (PLEG): container finished" podID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerID="0901168ef5f0487898a833ef432b345526a0ca2a40f9139ca62318a7c5e09b6a" exitCode=0 Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.358288 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" event={"ID":"58e7e511-833d-49d6-bff6-d490bf3293d0","Type":"ContainerDied","Data":"0901168ef5f0487898a833ef432b345526a0ca2a40f9139ca62318a7c5e09b6a"} Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.808190 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.857738 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.626835672 podStartE2EDuration="6.857722297s" podCreationTimestamp="2026-02-26 14:43:51 +0000 UTC" firstStartedPulling="2026-02-26 14:43:52.362757149 +0000 UTC m=+1810.836077672" lastFinishedPulling="2026-02-26 14:43:55.593643774 +0000 UTC m=+1814.066964297" observedRunningTime="2026-02-26 14:43:57.359261423 +0000 UTC m=+1815.832581946" watchObservedRunningTime="2026-02-26 14:43:57.857722297 +0000 UTC m=+1816.331042820" Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885226 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885299 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885355 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885451 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4fk4\" (UniqueName: \"kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885489 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.885543 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config\") pod \"58e7e511-833d-49d6-bff6-d490bf3293d0\" (UID: \"58e7e511-833d-49d6-bff6-d490bf3293d0\") " Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.932275 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4" (OuterVolumeSpecName: "kube-api-access-n4fk4") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "kube-api-access-n4fk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:43:57 crc kubenswrapper[4809]: I0226 14:43:57.991420 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4fk4\" (UniqueName: \"kubernetes.io/projected/58e7e511-833d-49d6-bff6-d490bf3293d0-kube-api-access-n4fk4\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.040860 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.053349 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.053713 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.112585 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.115167 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.115200 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.115216 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.115228 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.161259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config" (OuterVolumeSpecName: "config") pod "58e7e511-833d-49d6-bff6-d490bf3293d0" (UID: "58e7e511-833d-49d6-bff6-d490bf3293d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.217618 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58e7e511-833d-49d6-bff6-d490bf3293d0-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.378805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerStarted","Data":"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975"} Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.381589 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" event={"ID":"58e7e511-833d-49d6-bff6-d490bf3293d0","Type":"ContainerDied","Data":"e1a03fa39295a9cae43bf6981ccfdca5d466e908350f862bd2aad6070437928a"} Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.381655 4809 scope.go:117] "RemoveContainer" containerID="0901168ef5f0487898a833ef432b345526a0ca2a40f9139ca62318a7c5e09b6a" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.382003 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-6v92w" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.422566 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.426730 4809 scope.go:117] "RemoveContainer" containerID="86ac92af8096abe44e43e0472db00406a608518de672143971c03f1700c6df93" Feb 26 14:43:58 crc kubenswrapper[4809]: I0226 14:43:58.435367 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-6v92w"] Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.148327 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535284-wkthg"] Feb 26 14:44:00 crc kubenswrapper[4809]: E0226 14:44:00.149565 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="init" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.149582 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="init" Feb 26 14:44:00 crc kubenswrapper[4809]: E0226 14:44:00.149612 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="dnsmasq-dns" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.149619 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="dnsmasq-dns" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.149873 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" containerName="dnsmasq-dns" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.151804 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.161345 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.162041 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.162396 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.170897 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-wkthg"] Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.271961 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e7e511-833d-49d6-bff6-d490bf3293d0" path="/var/lib/kubelet/pods/58e7e511-833d-49d6-bff6-d490bf3293d0/volumes" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.283234 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4b4\" (UniqueName: \"kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4\") pod \"auto-csr-approver-29535284-wkthg\" (UID: \"c1b502e6-3aba-436c-b8c2-ef8a4d18e607\") " pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.386356 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4b4\" (UniqueName: \"kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4\") pod \"auto-csr-approver-29535284-wkthg\" (UID: \"c1b502e6-3aba-436c-b8c2-ef8a4d18e607\") " pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.406920 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4b4\" (UniqueName: \"kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4\") pod \"auto-csr-approver-29535284-wkthg\" (UID: \"c1b502e6-3aba-436c-b8c2-ef8a4d18e607\") " pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.433128 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerStarted","Data":"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88"} Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.434819 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.482525 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.680937116 podStartE2EDuration="7.482502756s" podCreationTimestamp="2026-02-26 14:43:53 +0000 UTC" firstStartedPulling="2026-02-26 14:43:54.39625016 +0000 UTC m=+1812.869570683" lastFinishedPulling="2026-02-26 14:43:59.1978158 +0000 UTC m=+1817.671136323" observedRunningTime="2026-02-26 14:44:00.465148329 +0000 UTC m=+1818.938468872" watchObservedRunningTime="2026-02-26 14:44:00.482502756 +0000 UTC m=+1818.955823279" Feb 26 14:44:00 crc kubenswrapper[4809]: I0226 14:44:00.492412 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:01 crc kubenswrapper[4809]: I0226 14:44:01.023771 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-wkthg"] Feb 26 14:44:01 crc kubenswrapper[4809]: I0226 14:44:01.446896 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-wkthg" event={"ID":"c1b502e6-3aba-436c-b8c2-ef8a4d18e607","Type":"ContainerStarted","Data":"c819bfd221a31042bd8764a4df8290c4dbfa5d82c60ab9da628eb1d8c378dd8c"} Feb 26 14:44:02 crc kubenswrapper[4809]: I0226 14:44:02.463159 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-wkthg" event={"ID":"c1b502e6-3aba-436c-b8c2-ef8a4d18e607","Type":"ContainerStarted","Data":"27650a03bfc0d833f9dbaa1fa0b4182d8e149d2caa2a05e0e16dd92072d7f794"} Feb 26 14:44:02 crc kubenswrapper[4809]: I0226 14:44:02.469861 4809 generic.go:334] "Generic (PLEG): container finished" podID="4145881d-ecb4-4082-9d47-09915db05fb6" containerID="197f34b03c1f5fa85c062535aeb7f5f41da4c5852984d61b88c0171a04078e86" exitCode=0 Feb 26 14:44:02 crc kubenswrapper[4809]: I0226 14:44:02.469929 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cqrd5" event={"ID":"4145881d-ecb4-4082-9d47-09915db05fb6","Type":"ContainerDied","Data":"197f34b03c1f5fa85c062535aeb7f5f41da4c5852984d61b88c0171a04078e86"} Feb 26 14:44:02 crc kubenswrapper[4809]: I0226 14:44:02.495531 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535284-wkthg" podStartSLOduration=1.647787481 podStartE2EDuration="2.495511893s" podCreationTimestamp="2026-02-26 14:44:00 +0000 UTC" firstStartedPulling="2026-02-26 14:44:01.027509805 +0000 UTC m=+1819.500830328" lastFinishedPulling="2026-02-26 14:44:01.875234217 +0000 UTC m=+1820.348554740" observedRunningTime="2026-02-26 14:44:02.483712395 +0000 UTC m=+1820.957032918" watchObservedRunningTime="2026-02-26 14:44:02.495511893 +0000 UTC m=+1820.968832426" Feb 26 14:44:03 crc kubenswrapper[4809]: I0226 14:44:03.491340 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1b502e6-3aba-436c-b8c2-ef8a4d18e607" containerID="27650a03bfc0d833f9dbaa1fa0b4182d8e149d2caa2a05e0e16dd92072d7f794" exitCode=0 Feb 26 14:44:03 crc kubenswrapper[4809]: I0226 14:44:03.492141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-wkthg" event={"ID":"c1b502e6-3aba-436c-b8c2-ef8a4d18e607","Type":"ContainerDied","Data":"27650a03bfc0d833f9dbaa1fa0b4182d8e149d2caa2a05e0e16dd92072d7f794"} Feb 26 14:44:03 crc kubenswrapper[4809]: I0226 14:44:03.990395 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.083082 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.083134 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.095490 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data\") pod \"4145881d-ecb4-4082-9d47-09915db05fb6\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.095588 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62b9q\" (UniqueName: \"kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q\") pod \"4145881d-ecb4-4082-9d47-09915db05fb6\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.095811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts\") pod \"4145881d-ecb4-4082-9d47-09915db05fb6\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.095875 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle\") pod \"4145881d-ecb4-4082-9d47-09915db05fb6\" (UID: \"4145881d-ecb4-4082-9d47-09915db05fb6\") " Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.102898 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts" (OuterVolumeSpecName: "scripts") pod "4145881d-ecb4-4082-9d47-09915db05fb6" (UID: "4145881d-ecb4-4082-9d47-09915db05fb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.106299 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q" (OuterVolumeSpecName: "kube-api-access-62b9q") pod "4145881d-ecb4-4082-9d47-09915db05fb6" (UID: "4145881d-ecb4-4082-9d47-09915db05fb6"). InnerVolumeSpecName "kube-api-access-62b9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.142784 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4145881d-ecb4-4082-9d47-09915db05fb6" (UID: "4145881d-ecb4-4082-9d47-09915db05fb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.142819 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data" (OuterVolumeSpecName: "config-data") pod "4145881d-ecb4-4082-9d47-09915db05fb6" (UID: "4145881d-ecb4-4082-9d47-09915db05fb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.199726 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.199977 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.200089 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4145881d-ecb4-4082-9d47-09915db05fb6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.200155 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62b9q\" (UniqueName: \"kubernetes.io/projected/4145881d-ecb4-4082-9d47-09915db05fb6-kube-api-access-62b9q\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.507722 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cqrd5" event={"ID":"4145881d-ecb4-4082-9d47-09915db05fb6","Type":"ContainerDied","Data":"91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd"} Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.507768 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d57af49183cc86422f28b42da45666aa6a66b6ecb2735c17c38db959c60dcd" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.511406 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cqrd5" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.714296 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.714711 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-log" containerID="cri-o://136134d905bda1d3f127af74a24180246adcfc03c461919164b8267353afcac6" gracePeriod=30 Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.714894 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-api" containerID="cri-o://c3346d517dd7b8f70001d33a4f5a3e2f74afa3bda045be3fbd58f88c7f298b3b" gracePeriod=30 Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.727275 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": EOF" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.727402 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": EOF" Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.814849 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.815206 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-log" containerID="cri-o://918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e" gracePeriod=30 Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.815346 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-metadata" containerID="cri-o://b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368" gracePeriod=30 Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.858033 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:04 crc kubenswrapper[4809]: I0226 14:44:04.858249 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerName="nova-scheduler-scheduler" containerID="cri-o://5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" gracePeriod=30 Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.059467 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.124190 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k4b4\" (UniqueName: \"kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4\") pod \"c1b502e6-3aba-436c-b8c2-ef8a4d18e607\" (UID: \"c1b502e6-3aba-436c-b8c2-ef8a4d18e607\") " Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.133982 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4" (OuterVolumeSpecName: "kube-api-access-2k4b4") pod "c1b502e6-3aba-436c-b8c2-ef8a4d18e607" (UID: "c1b502e6-3aba-436c-b8c2-ef8a4d18e607"). InnerVolumeSpecName "kube-api-access-2k4b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.227758 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k4b4\" (UniqueName: \"kubernetes.io/projected/c1b502e6-3aba-436c-b8c2-ef8a4d18e607-kube-api-access-2k4b4\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.359497 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-hr5gw"] Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.371038 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535278-hr5gw"] Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.523675 4809 generic.go:334] "Generic (PLEG): container finished" podID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerID="918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e" exitCode=143 Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.523755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerDied","Data":"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e"} Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.526071 4809 generic.go:334] "Generic (PLEG): container finished" podID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerID="136134d905bda1d3f127af74a24180246adcfc03c461919164b8267353afcac6" exitCode=143 Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.526146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerDied","Data":"136134d905bda1d3f127af74a24180246adcfc03c461919164b8267353afcac6"} Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.527729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535284-wkthg" event={"ID":"c1b502e6-3aba-436c-b8c2-ef8a4d18e607","Type":"ContainerDied","Data":"c819bfd221a31042bd8764a4df8290c4dbfa5d82c60ab9da628eb1d8c378dd8c"} Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.527765 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c819bfd221a31042bd8764a4df8290c4dbfa5d82c60ab9da628eb1d8c378dd8c" Feb 26 14:44:05 crc kubenswrapper[4809]: I0226 14:44:05.527790 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535284-wkthg" Feb 26 14:44:06 crc kubenswrapper[4809]: E0226 14:44:06.066216 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:44:06 crc kubenswrapper[4809]: E0226 14:44:06.067955 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:44:06 crc kubenswrapper[4809]: E0226 14:44:06.068903 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 14:44:06 crc kubenswrapper[4809]: E0226 14:44:06.068945 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerName="nova-scheduler-scheduler" Feb 26 14:44:06 crc kubenswrapper[4809]: I0226 14:44:06.272244 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec3502a-e50f-4840-a833-9e97d7649127" path="/var/lib/kubelet/pods/3ec3502a-e50f-4840-a833-9e97d7649127/volumes" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.503707 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.571336 4809 generic.go:334] "Generic (PLEG): container finished" podID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerID="b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368" exitCode=0 Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.571386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerDied","Data":"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368"} Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.571416 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c33371c-2bff-4e5e-8f92-c99583b54d6a","Type":"ContainerDied","Data":"44a5c7754478d849d95d4a4ac261689628ecfe56a3601b46df84f82d4626342e"} Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.571439 4809 scope.go:117] "RemoveContainer" containerID="b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.571842 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.600217 4809 scope.go:117] "RemoveContainer" containerID="918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.611888 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz8hx\" (UniqueName: \"kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx\") pod \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.612036 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle\") pod \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.612168 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs\") pod \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.612231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs\") pod \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.612259 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data\") pod \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\" (UID: \"4c33371c-2bff-4e5e-8f92-c99583b54d6a\") " Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.614046 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs" (OuterVolumeSpecName: "logs") pod "4c33371c-2bff-4e5e-8f92-c99583b54d6a" (UID: "4c33371c-2bff-4e5e-8f92-c99583b54d6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.632871 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx" (OuterVolumeSpecName: "kube-api-access-cz8hx") pod "4c33371c-2bff-4e5e-8f92-c99583b54d6a" (UID: "4c33371c-2bff-4e5e-8f92-c99583b54d6a"). InnerVolumeSpecName "kube-api-access-cz8hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.652608 4809 scope.go:117] "RemoveContainer" containerID="b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368" Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.653186 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368\": container with ID starting with b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368 not found: ID does not exist" containerID="b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.653236 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368"} err="failed to get container status \"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368\": rpc error: code = NotFound desc = could not find container \"b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368\": container with ID starting with b730219fda40f5e3890b0311136729c7ec24419f5ab7fd6e38a442c2ec635368 not found: ID does not exist" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.653262 4809 scope.go:117] "RemoveContainer" containerID="918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e" Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.653559 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e\": container with ID starting with 918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e not found: ID does not exist" containerID="918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.653658 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e"} err="failed to get container status \"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e\": rpc error: code = NotFound desc = could not find container \"918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e\": container with ID starting with 918753a2dce52b1e0b9f39263b83664734fe3c658f57a56519b2a014872f2a9e not found: ID does not exist" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.668289 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data" (OuterVolumeSpecName: "config-data") pod "4c33371c-2bff-4e5e-8f92-c99583b54d6a" (UID: "4c33371c-2bff-4e5e-8f92-c99583b54d6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.669904 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c33371c-2bff-4e5e-8f92-c99583b54d6a" (UID: "4c33371c-2bff-4e5e-8f92-c99583b54d6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.701561 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c33371c-2bff-4e5e-8f92-c99583b54d6a" (UID: "4c33371c-2bff-4e5e-8f92-c99583b54d6a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.715142 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c33371c-2bff-4e5e-8f92-c99583b54d6a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.715175 4809 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.715186 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.715195 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz8hx\" (UniqueName: \"kubernetes.io/projected/4c33371c-2bff-4e5e-8f92-c99583b54d6a-kube-api-access-cz8hx\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.715203 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c33371c-2bff-4e5e-8f92-c99583b54d6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.908190 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.920357 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.935487 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.936244 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4145881d-ecb4-4082-9d47-09915db05fb6" containerName="nova-manage" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.936312 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4145881d-ecb4-4082-9d47-09915db05fb6" containerName="nova-manage" Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.936366 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-log" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.936415 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-log" Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.936506 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b502e6-3aba-436c-b8c2-ef8a4d18e607" containerName="oc" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.936563 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b502e6-3aba-436c-b8c2-ef8a4d18e607" containerName="oc" Feb 26 14:44:08 crc kubenswrapper[4809]: E0226 14:44:08.936634 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-metadata" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.936692 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-metadata" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.936933 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-log" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.937032 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4145881d-ecb4-4082-9d47-09915db05fb6" containerName="nova-manage" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.937100 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" containerName="nova-metadata-metadata" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.937163 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b502e6-3aba-436c-b8c2-ef8a4d18e607" containerName="oc" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.938682 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.943708 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.943953 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 14:44:08 crc kubenswrapper[4809]: I0226 14:44:08.959348 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.024148 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.024351 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.024429 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ncn5\" (UniqueName: \"kubernetes.io/projected/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-kube-api-access-6ncn5\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.024680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-config-data\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.024711 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-logs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.127789 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ncn5\" (UniqueName: \"kubernetes.io/projected/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-kube-api-access-6ncn5\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.128118 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-config-data\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.128250 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-logs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.128588 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.128768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.129126 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-logs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.133488 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.133511 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.134158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-config-data\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.144072 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6ncn5\" (UniqueName: \"kubernetes.io/projected/e4e66982-31ee-45ee-9e2f-60fb4d8e24fe-kube-api-access-6ncn5\") pod \"nova-metadata-0\" (UID: \"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe\") " pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.294745 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 14:44:09 crc kubenswrapper[4809]: W0226 14:44:09.767404 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4e66982_31ee_45ee_9e2f_60fb4d8e24fe.slice/crio-a37a57eadf014b564f5bfd1f7e0b120447d8ea2003b835885f68ef7d9d1a6450 WatchSource:0}: Error finding container a37a57eadf014b564f5bfd1f7e0b120447d8ea2003b835885f68ef7d9d1a6450: Status 404 returned error can't find the container with id a37a57eadf014b564f5bfd1f7e0b120447d8ea2003b835885f68ef7d9d1a6450 Feb 26 14:44:09 crc kubenswrapper[4809]: I0226 14:44:09.767418 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.274072 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c33371c-2bff-4e5e-8f92-c99583b54d6a" path="/var/lib/kubelet/pods/4c33371c-2bff-4e5e-8f92-c99583b54d6a/volumes" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.543129 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.563150 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb4hp\" (UniqueName: \"kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp\") pod \"6a668464-1cf9-492a-8e9a-4f712b7c854c\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.563292 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data\") pod \"6a668464-1cf9-492a-8e9a-4f712b7c854c\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.563401 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle\") pod \"6a668464-1cf9-492a-8e9a-4f712b7c854c\" (UID: \"6a668464-1cf9-492a-8e9a-4f712b7c854c\") " Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.573357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp" (OuterVolumeSpecName: "kube-api-access-tb4hp") pod "6a668464-1cf9-492a-8e9a-4f712b7c854c" (UID: "6a668464-1cf9-492a-8e9a-4f712b7c854c"). InnerVolumeSpecName "kube-api-access-tb4hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.629295 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe","Type":"ContainerStarted","Data":"4df7313021b5bf9d85f34dbbf3368f00f7019688389afed69ae0547c32e53261"} Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.629342 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe","Type":"ContainerStarted","Data":"7d166b36e42486c78f0360176e01ac4a3d23a904687d55ca6b856b4e696b5ced"} Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.629357 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e66982-31ee-45ee-9e2f-60fb4d8e24fe","Type":"ContainerStarted","Data":"a37a57eadf014b564f5bfd1f7e0b120447d8ea2003b835885f68ef7d9d1a6450"} Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.646973 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data" (OuterVolumeSpecName: "config-data") pod "6a668464-1cf9-492a-8e9a-4f712b7c854c" (UID: "6a668464-1cf9-492a-8e9a-4f712b7c854c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647215 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a668464-1cf9-492a-8e9a-4f712b7c854c" (UID: "6a668464-1cf9-492a-8e9a-4f712b7c854c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647422 4809 generic.go:334] "Generic (PLEG): container finished" podID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" exitCode=0 Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647515 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a668464-1cf9-492a-8e9a-4f712b7c854c","Type":"ContainerDied","Data":"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52"} Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a668464-1cf9-492a-8e9a-4f712b7c854c","Type":"ContainerDied","Data":"69d8eb29a588cab4f1e0d9a384680ac94e9a766adf795b5d41fd876ae49481c8"} Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647658 4809 scope.go:117] "RemoveContainer" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.647809 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.667611 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb4hp\" (UniqueName: \"kubernetes.io/projected/6a668464-1cf9-492a-8e9a-4f712b7c854c-kube-api-access-tb4hp\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.667646 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.667655 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a668464-1cf9-492a-8e9a-4f712b7c854c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.694052 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.694012481 podStartE2EDuration="2.694012481s" podCreationTimestamp="2026-02-26 14:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:44:10.665453873 +0000 UTC m=+1829.138774396" watchObservedRunningTime="2026-02-26 14:44:10.694012481 +0000 UTC m=+1829.167333004" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.788609 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.789486 4809 scope.go:117] "RemoveContainer" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" Feb 26 14:44:10 crc kubenswrapper[4809]: E0226 14:44:10.790223 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52\": container with ID starting with 5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52 not found: ID does not exist" containerID="5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.790259 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52"} err="failed to get container status \"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52\": rpc error: code = NotFound desc = could not find container \"5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52\": container with ID starting with 5ecf00a7efedf59f6f4d63ff7655b64983e1de4bdbcd10420c5a7f8ea0ed9e52 not found: ID does not exist" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.810074 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.826864 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:10 crc kubenswrapper[4809]: E0226 14:44:10.827521 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerName="nova-scheduler-scheduler" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.827539 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerName="nova-scheduler-scheduler" Feb 26 14:44:10 crc 
kubenswrapper[4809]: I0226 14:44:10.827911 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" containerName="nova-scheduler-scheduler" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.828838 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.834986 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.840913 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.900890 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-config-data\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.901002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtlt9\" (UniqueName: \"kubernetes.io/projected/3a1d46ba-37f0-43d7-94a0-bea208549a22-kube-api-access-dtlt9\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:10 crc kubenswrapper[4809]: I0226 14:44:10.901074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.002837 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-config-data\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.002963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlt9\" (UniqueName: \"kubernetes.io/projected/3a1d46ba-37f0-43d7-94a0-bea208549a22-kube-api-access-dtlt9\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.003046 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.007400 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.007417 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a1d46ba-37f0-43d7-94a0-bea208549a22-config-data\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.027191 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlt9\" (UniqueName: \"kubernetes.io/projected/3a1d46ba-37f0-43d7-94a0-bea208549a22-kube-api-access-dtlt9\") pod \"nova-scheduler-0\" (UID: \"3a1d46ba-37f0-43d7-94a0-bea208549a22\") " pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.156245 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.258131 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:44:11 crc kubenswrapper[4809]: E0226 14:44:11.258661 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.664996 4809 generic.go:334] "Generic (PLEG): container finished" podID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerID="c3346d517dd7b8f70001d33a4f5a3e2f74afa3bda045be3fbd58f88c7f298b3b" exitCode=0 Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.665046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerDied","Data":"c3346d517dd7b8f70001d33a4f5a3e2f74afa3bda045be3fbd58f88c7f298b3b"} Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.831995 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 14:44:11 crc kubenswrapper[4809]: I0226 14:44:11.903308 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028134 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028227 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028264 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028343 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028398 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrbq\" (UniqueName: \"kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028473 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs\") pod \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\" (UID: \"6b380ede-96f3-4e06-8da8-0e8ef9301a31\") " Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.028806 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs" (OuterVolumeSpecName: "logs") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.033620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq" (OuterVolumeSpecName: "kube-api-access-rqrbq") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "kube-api-access-rqrbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.065270 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data" (OuterVolumeSpecName: "config-data") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.069459 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.089847 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.090802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6b380ede-96f3-4e06-8da8-0e8ef9301a31" (UID: "6b380ede-96f3-4e06-8da8-0e8ef9301a31"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131908 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131936 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131945 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131969 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrbq\" (UniqueName: \"kubernetes.io/projected/6b380ede-96f3-4e06-8da8-0e8ef9301a31-kube-api-access-rqrbq\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131980 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b380ede-96f3-4e06-8da8-0e8ef9301a31-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.131988 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b380ede-96f3-4e06-8da8-0e8ef9301a31-logs\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.270308 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a668464-1cf9-492a-8e9a-4f712b7c854c" path="/var/lib/kubelet/pods/6a668464-1cf9-492a-8e9a-4f712b7c854c/volumes" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.684319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a1d46ba-37f0-43d7-94a0-bea208549a22","Type":"ContainerStarted","Data":"98d9591fd6a705f7f76b7b074fcd24a47a7c9651c57e5fda733d5150622d9c66"} Feb 26 
14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.684676 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a1d46ba-37f0-43d7-94a0-bea208549a22","Type":"ContainerStarted","Data":"bb708948ffe90107168f48b18a151c27e76cabfdb6f3b8ad915d0a697d1682fe"} Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.689131 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b380ede-96f3-4e06-8da8-0e8ef9301a31","Type":"ContainerDied","Data":"cabf9be1a9f54bbf354ee8c1f6fd23992b84a2f0464c6dc9370982dd9b1f82e0"} Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.689186 4809 scope.go:117] "RemoveContainer" containerID="c3346d517dd7b8f70001d33a4f5a3e2f74afa3bda045be3fbd58f88c7f298b3b" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.689325 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.731390 4809 scope.go:117] "RemoveContainer" containerID="136134d905bda1d3f127af74a24180246adcfc03c461919164b8267353afcac6" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.739679 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.739660213 podStartE2EDuration="2.739660213s" podCreationTimestamp="2026-02-26 14:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:44:12.712416822 +0000 UTC m=+1831.185737405" watchObservedRunningTime="2026-02-26 14:44:12.739660213 +0000 UTC m=+1831.212980736" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.761038 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.777275 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.803183 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:12 crc kubenswrapper[4809]: E0226 14:44:12.803791 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-api" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.803816 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-api" Feb 26 14:44:12 crc kubenswrapper[4809]: E0226 14:44:12.803851 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-log" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.803861 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-log" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.804109 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-api" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.804147 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" containerName="nova-api-log" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.805421 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.809364 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.809469 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.809577 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.814340 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.950811 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.950909 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.950947 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-config-data\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.952905 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.952973 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjkr\" (UniqueName: \"kubernetes.io/projected/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-kube-api-access-qxjkr\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:12 crc kubenswrapper[4809]: I0226 14:44:12.953362 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-logs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.055700 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.055756 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjkr\" (UniqueName: \"kubernetes.io/projected/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-kube-api-access-qxjkr\") 
pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.055889 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-logs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.055933 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.055998 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.056069 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-config-data\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.056894 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-logs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.060873 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.061491 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.063675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-config-data\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.066553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.094187 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjkr\" (UniqueName: \"kubernetes.io/projected/5a8e2401-9bad-4dce-80b6-b76f9b1f07b1-kube-api-access-qxjkr\") pod \"nova-api-0\" (UID: \"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1\") " pod="openstack/nova-api-0" Feb 
26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.129867 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.642288 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 14:44:13 crc kubenswrapper[4809]: W0226 14:44:13.647716 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a8e2401_9bad_4dce_80b6_b76f9b1f07b1.slice/crio-26ca2688a379ff6cf21dc6588ab0b7ce29f10168733ca83a029e38e9deebae39 WatchSource:0}: Error finding container 26ca2688a379ff6cf21dc6588ab0b7ce29f10168733ca83a029e38e9deebae39: Status 404 returned error can't find the container with id 26ca2688a379ff6cf21dc6588ab0b7ce29f10168733ca83a029e38e9deebae39 Feb 26 14:44:13 crc kubenswrapper[4809]: I0226 14:44:13.703831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1","Type":"ContainerStarted","Data":"26ca2688a379ff6cf21dc6588ab0b7ce29f10168733ca83a029e38e9deebae39"} Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.270551 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b380ede-96f3-4e06-8da8-0e8ef9301a31" path="/var/lib/kubelet/pods/6b380ede-96f3-4e06-8da8-0e8ef9301a31/volumes" Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.295741 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.298280 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.723055 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1","Type":"ContainerStarted","Data":"0fcc42c7a71a358adb5a140bf888984f04542b43ab58d4fd245682430e4c8f7c"} Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.723122 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a8e2401-9bad-4dce-80b6-b76f9b1f07b1","Type":"ContainerStarted","Data":"d6e0f3389c74913a2bf6329c477515dcf5f1f3686ce5e0099ae956eab47b9c18"} Feb 26 14:44:14 crc kubenswrapper[4809]: I0226 14:44:14.757474 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.757453038 podStartE2EDuration="2.757453038s" podCreationTimestamp="2026-02-26 14:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:44:14.749704416 +0000 UTC m=+1833.223024949" watchObservedRunningTime="2026-02-26 14:44:14.757453038 +0000 UTC m=+1833.230773561" Feb 26 14:44:16 crc kubenswrapper[4809]: I0226 14:44:16.157355 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 14:44:19 crc kubenswrapper[4809]: I0226 14:44:19.294950 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 14:44:19 crc kubenswrapper[4809]: I0226 14:44:19.295570 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 14:44:20 crc kubenswrapper[4809]: I0226 14:44:20.309202 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="e4e66982-31ee-45ee-9e2f-60fb4d8e24fe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:44:20 crc kubenswrapper[4809]: I0226 14:44:20.309891 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4e66982-31ee-45ee-9e2f-60fb4d8e24fe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:44:21 crc kubenswrapper[4809]: I0226 14:44:21.157360 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 14:44:21 crc kubenswrapper[4809]: I0226 14:44:21.200571 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 14:44:21 crc kubenswrapper[4809]: I0226 14:44:21.845497 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 14:44:22 crc kubenswrapper[4809]: I0226 14:44:22.256779 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:44:22 crc kubenswrapper[4809]: E0226 14:44:22.257388 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:44:23 crc kubenswrapper[4809]: I0226 14:44:23.130629 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:44:23 crc kubenswrapper[4809]: I0226 14:44:23.130683 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 14:44:23 crc kubenswrapper[4809]: I0226 14:44:23.782723 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 14:44:24 crc kubenswrapper[4809]: I0226 14:44:24.149234 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5a8e2401-9bad-4dce-80b6-b76f9b1f07b1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 14:44:24 crc kubenswrapper[4809]: I0226 14:44:24.149267 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5a8e2401-9bad-4dce-80b6-b76f9b1f07b1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 14:44:27 crc kubenswrapper[4809]: I0226 14:44:27.774171 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:27 crc kubenswrapper[4809]: I0226 14:44:27.774764 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" containerName="kube-state-metrics" containerID="cri-o://6a4c4cf1ed575464012fc20bc2a7cf0933298c8246b9e5e93716963d196cf9d0" gracePeriod=30 Feb 26 14:44:27 crc 
kubenswrapper[4809]: I0226 14:44:27.846583 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:27 crc kubenswrapper[4809]: I0226 14:44:27.847048 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" containerName="mysqld-exporter" containerID="cri-o://ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b" gracePeriod=30 Feb 26 14:44:27 crc kubenswrapper[4809]: I0226 14:44:27.909265 4809 generic.go:334] "Generic (PLEG): container finished" podID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" containerID="6a4c4cf1ed575464012fc20bc2a7cf0933298c8246b9e5e93716963d196cf9d0" exitCode=2 Feb 26 14:44:27 crc kubenswrapper[4809]: I0226 14:44:27.909306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc","Type":"ContainerDied","Data":"6a4c4cf1ed575464012fc20bc2a7cf0933298c8246b9e5e93716963d196cf9d0"} Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.515153 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.523343 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.580464 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dl8r\" (UniqueName: \"kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r\") pod \"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc\" (UID: \"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc\") " Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.592797 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r" (OuterVolumeSpecName: "kube-api-access-5dl8r") pod "3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" (UID: "3bc4dc43-d109-406e-9cb8-f8d4cb2214bc"). InnerVolumeSpecName "kube-api-access-5dl8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.683915 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle\") pod \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.684370 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data\") pod \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.684614 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gjkd\" (UniqueName: \"kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd\") pod \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\" (UID: \"4b3f6c49-8612-45fc-af31-6ff2c2201c2e\") " Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.686031 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dl8r\" (UniqueName: \"kubernetes.io/projected/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc-kube-api-access-5dl8r\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.688971 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd" (OuterVolumeSpecName: "kube-api-access-7gjkd") pod "4b3f6c49-8612-45fc-af31-6ff2c2201c2e" (UID: "4b3f6c49-8612-45fc-af31-6ff2c2201c2e"). InnerVolumeSpecName "kube-api-access-7gjkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.731314 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b3f6c49-8612-45fc-af31-6ff2c2201c2e" (UID: "4b3f6c49-8612-45fc-af31-6ff2c2201c2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.766361 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data" (OuterVolumeSpecName: "config-data") pod "4b3f6c49-8612-45fc-af31-6ff2c2201c2e" (UID: "4b3f6c49-8612-45fc-af31-6ff2c2201c2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.788075 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gjkd\" (UniqueName: \"kubernetes.io/projected/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-kube-api-access-7gjkd\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.788110 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.788120 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3f6c49-8612-45fc-af31-6ff2c2201c2e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.921832 4809 generic.go:334] "Generic (PLEG): container finished" podID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" containerID="ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b" exitCode=2 Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.921885 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4b3f6c49-8612-45fc-af31-6ff2c2201c2e","Type":"ContainerDied","Data":"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b"} Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.921923 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.921945 4809 scope.go:117] "RemoveContainer" containerID="ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.921932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4b3f6c49-8612-45fc-af31-6ff2c2201c2e","Type":"ContainerDied","Data":"413507516be118ba96a0fb97322b0d02174db3fb26ef85085c473107bbef3376"} Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.924080 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bc4dc43-d109-406e-9cb8-f8d4cb2214bc","Type":"ContainerDied","Data":"4414b93965dbf4b7141982b5e1856273b67c380be8763e50988b33380d9af11e"} Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.924192 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.964620 4809 scope.go:117] "RemoveContainer" containerID="ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b" Feb 26 14:44:28 crc kubenswrapper[4809]: E0226 14:44:28.964994 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b\": container with ID starting with ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b not found: ID does not exist" containerID="ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.965039 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b"} err="failed to get container status \"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b\": rpc error: code = NotFound desc = could not find container \"ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b\": container with ID starting with ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b not found: ID does not exist" Feb 26 14:44:28 crc kubenswrapper[4809]: I0226 14:44:28.965062 4809 scope.go:117] "RemoveContainer" containerID="6a4c4cf1ed575464012fc20bc2a7cf0933298c8246b9e5e93716963d196cf9d0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.060048 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.073005 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: E0226 14:44:29.075240 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b3f6c49_8612_45fc_af31_6ff2c2201c2e.slice/crio-413507516be118ba96a0fb97322b0d02174db3fb26ef85085c473107bbef3376\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b3f6c49_8612_45fc_af31_6ff2c2201c2e.slice\": RecentStats: unable to find data in memory cache]" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.084719 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.096350 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: E0226 14:44:29.096787 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" containerName="kube-state-metrics" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.096805 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" containerName="kube-state-metrics" Feb 26 14:44:29 crc kubenswrapper[4809]: E0226 14:44:29.096817 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" containerName="mysqld-exporter" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.096823 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" containerName="mysqld-exporter"
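The "RemoveContainer" / NotFound sequence above is benign: by the time the kubelet retries cleanup, CRI-O has already pruned the container, so pod_container_deletor.go logs the NotFound error and carries on. A minimal Go sketch of the same tolerate-NotFound pattern (illustrative only, not the kubelet's actual code; it assumes a gRPC-backed runtime client returning status codes like the ones quoted in the log):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer treats an already-deleted container as success, which
    // keeps cleanup idempotent when the runtime has pruned the container first.
    func removeContainer(remove func(id string) error, id string) error {
    	err := remove(id)
    	if err == nil {
    		return nil
    	}
    	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
    		fmt.Printf("container %s already gone, nothing to do\n", id)
    		return nil
    	}
    	return err
    }

    func main() {
    	// Simulates the runtime answer seen above: rpc error: code = NotFound.
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	_ = removeContainer(gone, "ff00f8952fc312a97c4bdf1fde2e2f4043f2ef38ca93374452251805c2efee5b")
    }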
"RemoveStaleState removing state" podUID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" containerName="mysqld-exporter" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.097119 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" containerName="kube-state-metrics" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.097841 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.105631 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.105649 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.110998 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.124760 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.138837 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.140136 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.141850 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.142073 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.153553 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.198726 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.198851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wwb\" (UniqueName: \"kubernetes.io/projected/c3376572-be7f-494e-a652-045bf9fc9f06-kube-api-access-28wwb\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.198904 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.198983 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-config-data\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 
Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.299876 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.299959 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.301459 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fj8x\" (UniqueName: \"kubernetes.io/projected/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-api-access-4fj8x\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.301590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wwb\" (UniqueName: \"kubernetes.io/projected/c3376572-be7f-494e-a652-045bf9fc9f06-kube-api-access-28wwb\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.301774 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.301943 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-config-data\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.302235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.302455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.302620 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.302757 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.306516 4809 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.307500 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-config-data\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.308005 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.313343 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.313727 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3376572-be7f-494e-a652-045bf9fc9f06-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.332568 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wwb\" (UniqueName: \"kubernetes.io/projected/c3376572-be7f-494e-a652-045bf9fc9f06-kube-api-access-28wwb\") pod \"mysqld-exporter-0\" (UID: \"c3376572-be7f-494e-a652-045bf9fc9f06\") " pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.405182 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.405237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.405265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fj8x\" (UniqueName: \"kubernetes.io/projected/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-api-access-4fj8x\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.405442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.412351 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.412704 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.425709 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.425725 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763bd9-0040-4c8b-996b-e837d320ab67-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.434315 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fj8x\" (UniqueName: \"kubernetes.io/projected/0c763bd9-0040-4c8b-996b-e837d320ab67-kube-api-access-4fj8x\") pod \"kube-state-metrics-0\" (UID: \"0c763bd9-0040-4c8b-996b-e837d320ab67\") " pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.458376 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.970863 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 14:44:29 crc kubenswrapper[4809]: W0226 14:44:29.976210 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3376572_be7f_494e_a652_045bf9fc9f06.slice/crio-bb501a775d160ef14ca882fbf7d07ca1d91c9cc6bffc5badb8951cbdea18acb5 WatchSource:0}: Error finding container bb501a775d160ef14ca882fbf7d07ca1d91c9cc6bffc5badb8951cbdea18acb5: Status 404 returned error can't find the container with id bb501a775d160ef14ca882fbf7d07ca1d91c9cc6bffc5badb8951cbdea18acb5 Feb 26 14:44:29 crc kubenswrapper[4809]: W0226 14:44:29.981958 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c763bd9_0040_4c8b_996b_e837d320ab67.slice/crio-31574f3281b941ad54a2890a2a2d24646034f437a15e84e628cda458960fa27e WatchSource:0}: Error finding container 31574f3281b941ad54a2890a2a2d24646034f437a15e84e628cda458960fa27e: Status 404 returned error can't find the container with id 31574f3281b941ad54a2890a2a2d24646034f437a15e84e628cda458960fa27e Feb 26 14:44:29 crc kubenswrapper[4809]: I0226 14:44:29.995844 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.090708 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.091200 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-central-agent" 
containerID="cri-o://8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1" gracePeriod=30 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.091208 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="proxy-httpd" containerID="cri-o://2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88" gracePeriod=30 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.091238 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="sg-core" containerID="cri-o://ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975" gracePeriod=30 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.091287 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-notification-agent" containerID="cri-o://d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56" gracePeriod=30 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.280962 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc4dc43-d109-406e-9cb8-f8d4cb2214bc" path="/var/lib/kubelet/pods/3bc4dc43-d109-406e-9cb8-f8d4cb2214bc/volumes" Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.282627 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3f6c49-8612-45fc-af31-6ff2c2201c2e" path="/var/lib/kubelet/pods/4b3f6c49-8612-45fc-af31-6ff2c2201c2e/volumes" Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.958143 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0c763bd9-0040-4c8b-996b-e837d320ab67","Type":"ContainerStarted","Data":"56578fdeb4464b47855e618033c484bd79b2f6dbfe49b958f8feed8e07c3d7fe"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.958195 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.958205 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0c763bd9-0040-4c8b-996b-e837d320ab67","Type":"ContainerStarted","Data":"31574f3281b941ad54a2890a2a2d24646034f437a15e84e628cda458960fa27e"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961567 4809 generic.go:334] "Generic (PLEG): container finished" podID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerID="2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88" exitCode=0 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961604 4809 generic.go:334] "Generic (PLEG): container finished" podID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerID="ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975" exitCode=2 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961613 4809 generic.go:334] "Generic (PLEG): container finished" podID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerID="8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1" exitCode=0 Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961663 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerDied","Data":"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961694 
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerDied","Data":"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.961712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerDied","Data":"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.962959 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"c3376572-be7f-494e-a652-045bf9fc9f06","Type":"ContainerStarted","Data":"bb501a775d160ef14ca882fbf7d07ca1d91c9cc6bffc5badb8951cbdea18acb5"} Feb 26 14:44:30 crc kubenswrapper[4809]: I0226 14:44:30.985413 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.541291011 podStartE2EDuration="1.985393517s" podCreationTimestamp="2026-02-26 14:44:29 +0000 UTC" firstStartedPulling="2026-02-26 14:44:29.987098739 +0000 UTC m=+1848.460419262" lastFinishedPulling="2026-02-26 14:44:30.431201245 +0000 UTC m=+1848.904521768" observedRunningTime="2026-02-26 14:44:30.973152576 +0000 UTC m=+1849.446473099" watchObservedRunningTime="2026-02-26 14:44:30.985393517 +0000 UTC m=+1849.458714030" Feb 26 14:44:31 crc kubenswrapper[4809]: I0226 14:44:31.987225 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"c3376572-be7f-494e-a652-045bf9fc9f06","Type":"ContainerStarted","Data":"c7703bbc2763faa5c20da6baa95c5f926c2b2dbdab3adab652bad6214606fef5"} Feb 26 14:44:32 crc kubenswrapper[4809]: I0226 14:44:32.017793 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.196813596 podStartE2EDuration="3.017771202s" podCreationTimestamp="2026-02-26 14:44:29 +0000 UTC" firstStartedPulling="2026-02-26 14:44:29.981871079 +0000 UTC m=+1848.455191602" lastFinishedPulling="2026-02-26 14:44:30.802828685 +0000 UTC m=+1849.276149208" observedRunningTime="2026-02-26 14:44:32.007802176 +0000 UTC m=+1850.481122699" watchObservedRunningTime="2026-02-26 14:44:32.017771202 +0000 UTC m=+1850.491091725" Feb 26 14:44:33 crc kubenswrapper[4809]: I0226 14:44:33.136354 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 14:44:33 crc kubenswrapper[4809]: I0226 14:44:33.136902 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 14:44:33 crc kubenswrapper[4809]: I0226 14:44:33.137382 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 14:44:33 crc kubenswrapper[4809]: I0226 14:44:33.142725 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 14:44:34 crc kubenswrapper[4809]: I0226 14:44:34.018910 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 14:44:34 crc kubenswrapper[4809]: I0226 14:44:34.037217 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
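The two "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go check using the kube-state-metrics-0 timestamps quoted in the log (a minimal sketch of the arithmetic, not the tracker's actual code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Layout matching the log's "2026-02-26 14:44:29 +0000 UTC" form.
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-02-26 14:44:29 +0000 UTC")
    	firstPull := mustParse("2026-02-26 14:44:29.987098739 +0000 UTC")
    	lastPull := mustParse("2026-02-26 14:44:30.431201245 +0000 UTC")
    	watched := mustParse("2026-02-26 14:44:30.985393517 +0000 UTC")

    	e2e := watched.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println(e2e) // 1.985393517s == podStartE2EDuration
    	fmt.Println(slo) // 1.541291011s == podStartSLOduration
    }

The same arithmetic reproduces the mysqld-exporter-0 figures below (3.017771202s and 2.196813596).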
Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.257343 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:44:35 crc kubenswrapper[4809]: E0226 14:44:35.257927 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.626396 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.773708 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.773771 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.773828 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.773870 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swvz\" (UniqueName: \"kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.773926 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.774162 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.774186 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle\") pod \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\" (UID: \"4cf8775d-ef8b-4f0a-ba47-b088e7331c65\") " Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.774449 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.774564 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.775285 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.775304 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.779778 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz" (OuterVolumeSpecName: "kube-api-access-2swvz") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "kube-api-access-2swvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.780277 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts" (OuterVolumeSpecName: "scripts") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.808116 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.878222 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.878724 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.878929 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swvz\" (UniqueName: \"kubernetes.io/projected/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-kube-api-access-2swvz\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.895147 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.895714 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data" (OuterVolumeSpecName: "config-data") pod "4cf8775d-ef8b-4f0a-ba47-b088e7331c65" (UID: "4cf8775d-ef8b-4f0a-ba47-b088e7331c65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.981658 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:35 crc kubenswrapper[4809]: I0226 14:44:35.982223 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf8775d-ef8b-4f0a-ba47-b088e7331c65-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.048742 4809 generic.go:334] "Generic (PLEG): container finished" podID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerID="d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56" exitCode=0 Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.048806 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerDied","Data":"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56"} Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.048858 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf8775d-ef8b-4f0a-ba47-b088e7331c65","Type":"ContainerDied","Data":"057de352ae1df56d2492518010258f16159d9cf2551bc5952a0d58bbcf06e950"} Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.048880 4809 scope.go:117] "RemoveContainer" containerID="2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.049454 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.085683 4809 scope.go:117] "RemoveContainer" containerID="ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.097210 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.109711 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.131935 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.132274 4809 scope.go:117] "RemoveContainer" containerID="d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.133080 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-notification-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133177 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-notification-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.133257 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="proxy-httpd" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133326 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="proxy-httpd" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.133413 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="sg-core" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133467 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="sg-core" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.133550 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-central-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133603 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-central-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133867 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-central-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.133946 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="ceilometer-notification-agent" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.134005 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="sg-core" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.134094 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" containerName="proxy-httpd" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.136934 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.140352 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.140930 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.141350 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.151340 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.182284 4809 scope.go:117] "RemoveContainer" containerID="8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.212271 4809 scope.go:117] "RemoveContainer" containerID="2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.213495 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88\": container with ID starting with 2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88 not found: ID does not exist" containerID="2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.213542 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88"} err="failed to get container status \"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88\": rpc error: code = NotFound desc = could not find container \"2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88\": container with ID starting with 2c0bb4492db6010f45e02f8716285753ccb39ca2eb44d0bdef69afc9ad6e7d88 not found: ID does not exist" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.213571 4809 scope.go:117] "RemoveContainer" containerID="ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.214004 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975\": container with ID starting with ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975 not found: ID does not exist" containerID="ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.214070 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975"} err="failed to get container status \"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975\": rpc error: code = NotFound desc = could not find container \"ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975\": container with ID starting with ac5cab0f77d435db0cf21d9eb2d11b86f6231ef7d06a151db27e66aa3d876975 not found: ID does not exist" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.214106 4809 scope.go:117] "RemoveContainer" containerID="d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56" Feb 26 14:44:36 
crc kubenswrapper[4809]: E0226 14:44:36.215553 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56\": container with ID starting with d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56 not found: ID does not exist" containerID="d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.215661 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56"} err="failed to get container status \"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56\": rpc error: code = NotFound desc = could not find container \"d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56\": container with ID starting with d80ee130f2d3bac6249fadf78f705960f224f3bdad846f4045e5a9957f0edc56 not found: ID does not exist" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.215738 4809 scope.go:117] "RemoveContainer" containerID="8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1" Feb 26 14:44:36 crc kubenswrapper[4809]: E0226 14:44:36.216269 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1\": container with ID starting with 8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1 not found: ID does not exist" containerID="8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.216349 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1"} err="failed to get container status \"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1\": rpc error: code = NotFound desc = could not find container \"8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1\": container with ID starting with 8479aed023bb2f7e3ec3af6e1e3e8cb2a3f7592c8744ed68db355d25290c25d1 not found: ID does not exist" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.279368 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf8775d-ef8b-4f0a-ba47-b088e7331c65" path="/var/lib/kubelet/pods/4cf8775d-ef8b-4f0a-ba47-b088e7331c65/volumes" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.289221 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.289271 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzbv\" (UniqueName: \"kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.289311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data\") pod 
\"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.289778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.290306 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.290743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.290921 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.290968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.392794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.392938 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393100 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393136 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfzbv\" (UniqueName: \"kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.393293 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.394151 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.394321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.397840 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.399359 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.399822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.402002 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.403847 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.414568 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfzbv\" (UniqueName: \"kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv\") pod \"ceilometer-0\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.473605 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:44:36 crc kubenswrapper[4809]: I0226 14:44:36.995589 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:44:37 crc kubenswrapper[4809]: I0226 14:44:37.063421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerStarted","Data":"6461bbf50d643c93643748e4ce5ed11fa173d389fe3a09c54a9278a0be93d8d9"} Feb 26 14:44:38 crc kubenswrapper[4809]: I0226 14:44:38.112482 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerStarted","Data":"f1dca72ed800b73ddf1050a01afeefeb5ba311539883d6f28019843f4fb822cc"} Feb 26 14:44:39 crc kubenswrapper[4809]: I0226 14:44:39.126519 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerStarted","Data":"ba7b43be433fd6e67478691f66c675fec15bf7df7f640ebb331e6c8696a7d4c6"} Feb 26 14:44:39 crc kubenswrapper[4809]: I0226 14:44:39.469925 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 14:44:41 crc kubenswrapper[4809]: I0226 14:44:41.152745 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerStarted","Data":"17e44fecce78001640b6db51480f0fa28dc286a09625cc1b1b910fc63f3bdfb4"} Feb 26 14:44:42 crc kubenswrapper[4809]: I0226 14:44:42.166311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerStarted","Data":"86aacd17c774ea510cb52539fb46f17afd6b95df3906db85f955867fa3443ae1"} Feb 26 14:44:42 crc kubenswrapper[4809]: I0226 14:44:42.170804 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:44:42 crc kubenswrapper[4809]: I0226 14:44:42.198483 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.485510074 podStartE2EDuration="6.198465503s" podCreationTimestamp="2026-02-26 14:44:36 +0000 UTC" firstStartedPulling="2026-02-26 14:44:36.987717048 +0000 UTC m=+1855.461037611" lastFinishedPulling="2026-02-26 14:44:41.700672517 +0000 UTC m=+1860.173993040" observedRunningTime="2026-02-26 14:44:42.195942631 +0000 UTC m=+1860.669263154" watchObservedRunningTime="2026-02-26 14:44:42.198465503 +0000 UTC m=+1860.671786026" Feb 26 14:44:46 crc kubenswrapper[4809]: I0226 14:44:46.257571 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:44:46 crc kubenswrapper[4809]: E0226 14:44:46.259594 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:44:57 crc kubenswrapper[4809]: I0226 14:44:57.258570 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:44:57 crc kubenswrapper[4809]: E0226 14:44:57.260612 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.165315 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp"] Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.168552 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.171455 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.171702 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.177421 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp"] Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.206023 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.206091 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcntd\" (UniqueName: \"kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.206139 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.308203 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.308323 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcntd\" (UniqueName: \"kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.308770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.309472 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.317644 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.328249 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcntd\" (UniqueName: \"kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd\") pod \"collect-profiles-29535285-87tzp\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:00 crc kubenswrapper[4809]: I0226 14:45:00.498153 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:01 crc kubenswrapper[4809]: W0226 14:45:01.008168 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac336caa_9f34_4637_a9a8_acd1690cfa57.slice/crio-d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f WatchSource:0}: Error finding container d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f: Status 404 returned error can't find the container with id d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f Feb 26 14:45:01 crc kubenswrapper[4809]: I0226 14:45:01.020211 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp"] Feb 26 14:45:01 crc kubenswrapper[4809]: I0226 14:45:01.442901 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" event={"ID":"ac336caa-9f34-4637-a9a8-acd1690cfa57","Type":"ContainerStarted","Data":"41a46e6e938e39f69e8d996cf838ce36e6c2e6a2ddaa73e5c7d1447b52cc37f2"} Feb 26 14:45:01 crc kubenswrapper[4809]: I0226 14:45:01.443341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" event={"ID":"ac336caa-9f34-4637-a9a8-acd1690cfa57","Type":"ContainerStarted","Data":"d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f"} Feb 26 14:45:01 crc kubenswrapper[4809]: I0226 14:45:01.469104 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" podStartSLOduration=1.4690808180000001 podStartE2EDuration="1.469080818s" podCreationTimestamp="2026-02-26 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:45:01.455834509 +0000 UTC m=+1879.929155032" watchObservedRunningTime="2026-02-26 14:45:01.469080818 +0000 UTC m=+1879.942401341" Feb 26 14:45:02 crc kubenswrapper[4809]: I0226 14:45:02.469174 4809 generic.go:334] "Generic (PLEG): container finished" podID="ac336caa-9f34-4637-a9a8-acd1690cfa57" containerID="41a46e6e938e39f69e8d996cf838ce36e6c2e6a2ddaa73e5c7d1447b52cc37f2" exitCode=0 Feb 26 14:45:02 crc kubenswrapper[4809]: I0226 14:45:02.469244 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" event={"ID":"ac336caa-9f34-4637-a9a8-acd1690cfa57","Type":"ContainerDied","Data":"41a46e6e938e39f69e8d996cf838ce36e6c2e6a2ddaa73e5c7d1447b52cc37f2"} Feb 26 14:45:03 crc kubenswrapper[4809]: I0226 14:45:03.899889 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.019899 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcntd\" (UniqueName: \"kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd\") pod \"ac336caa-9f34-4637-a9a8-acd1690cfa57\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.020178 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume\") pod \"ac336caa-9f34-4637-a9a8-acd1690cfa57\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.020222 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume\") pod \"ac336caa-9f34-4637-a9a8-acd1690cfa57\" (UID: \"ac336caa-9f34-4637-a9a8-acd1690cfa57\") " Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.021898 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac336caa-9f34-4637-a9a8-acd1690cfa57" (UID: "ac336caa-9f34-4637-a9a8-acd1690cfa57"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.028445 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd" (OuterVolumeSpecName: "kube-api-access-lcntd") pod "ac336caa-9f34-4637-a9a8-acd1690cfa57" (UID: "ac336caa-9f34-4637-a9a8-acd1690cfa57"). InnerVolumeSpecName "kube-api-access-lcntd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.030296 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac336caa-9f34-4637-a9a8-acd1690cfa57" (UID: "ac336caa-9f34-4637-a9a8-acd1690cfa57"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.122805 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcntd\" (UniqueName: \"kubernetes.io/projected/ac336caa-9f34-4637-a9a8-acd1690cfa57-kube-api-access-lcntd\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.122842 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac336caa-9f34-4637-a9a8-acd1690cfa57-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.122854 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac336caa-9f34-4637-a9a8-acd1690cfa57-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.535594 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" event={"ID":"ac336caa-9f34-4637-a9a8-acd1690cfa57","Type":"ContainerDied","Data":"d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f"} Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.535640 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4a3c0e3548a741bc591866f3b129b6af7afdce616bdf38210251006ad0afb5f" Feb 26 14:45:04 crc kubenswrapper[4809]: I0226 14:45:04.535699 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp" Feb 26 14:45:06 crc kubenswrapper[4809]: I0226 14:45:06.327462 4809 scope.go:117] "RemoveContainer" containerID="2ac4aad9a38a72b0914ba782af54c44bf5a3aaab2af74c2d3c2207aad8b147d6" Feb 26 14:45:06 crc kubenswrapper[4809]: I0226 14:45:06.387808 4809 scope.go:117] "RemoveContainer" containerID="004eb8f9928a5d772284cd399142c60f678e7b1c2a32077f4b1e07bafc1d1330" Feb 26 14:45:06 crc kubenswrapper[4809]: I0226 14:45:06.410793 4809 scope.go:117] "RemoveContainer" containerID="478493260f335a9117c4aa7e88a8a4e2074736a6d1e2e2da6352e7cdc789eabd" Feb 26 14:45:06 crc kubenswrapper[4809]: I0226 14:45:06.473510 4809 scope.go:117] "RemoveContainer" containerID="57d74010c098aeee06822bac5ed3fd7d4b634fd26a6ae2d59a1cdecac1ffa85c" Feb 26 14:45:06 crc kubenswrapper[4809]: I0226 14:45:06.483781 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 14:45:08 crc kubenswrapper[4809]: I0226 14:45:08.257244 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:45:08 crc kubenswrapper[4809]: E0226 14:45:08.258233 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.602606 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-pph48"] Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.616543 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-pph48"] Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 
14:45:18.717166 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-mnms7"] Feb 26 14:45:18 crc kubenswrapper[4809]: E0226 14:45:18.717849 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac336caa-9f34-4637-a9a8-acd1690cfa57" containerName="collect-profiles" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.717880 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac336caa-9f34-4637-a9a8-acd1690cfa57" containerName="collect-profiles" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.718190 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac336caa-9f34-4637-a9a8-acd1690cfa57" containerName="collect-profiles" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.719252 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.733983 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mnms7"] Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.859496 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.859912 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.859982 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665n8\" (UniqueName: \"kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.962506 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.962640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665n8\" (UniqueName: \"kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.962816 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.968055 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:18 crc kubenswrapper[4809]: I0226 14:45:18.971533 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:19 crc kubenswrapper[4809]: I0226 14:45:19.003737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665n8\" (UniqueName: \"kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8\") pod \"heat-db-sync-mnms7\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:19 crc kubenswrapper[4809]: I0226 14:45:19.070491 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mnms7" Feb 26 14:45:19 crc kubenswrapper[4809]: I0226 14:45:19.678528 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mnms7"] Feb 26 14:45:19 crc kubenswrapper[4809]: I0226 14:45:19.798550 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mnms7" event={"ID":"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd","Type":"ContainerStarted","Data":"41a38271df404556e54e3ed6a8b7be7da89a25a6621d1a9a8d411656774945ee"} Feb 26 14:45:20 crc kubenswrapper[4809]: I0226 14:45:20.278564 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84499f28-1908-4654-b0bc-a6961f49bb57" path="/var/lib/kubelet/pods/84499f28-1908-4654-b0bc-a6961f49bb57/volumes" Feb 26 14:45:20 crc kubenswrapper[4809]: I0226 14:45:20.817957 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.478887 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.479447 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-central-agent" containerID="cri-o://f1dca72ed800b73ddf1050a01afeefeb5ba311539883d6f28019843f4fb822cc" gracePeriod=30 Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.479953 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="proxy-httpd" containerID="cri-o://86aacd17c774ea510cb52539fb46f17afd6b95df3906db85f955867fa3443ae1" gracePeriod=30 Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.480044 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-notification-agent" containerID="cri-o://ba7b43be433fd6e67478691f66c675fec15bf7df7f640ebb331e6c8696a7d4c6" gracePeriod=30 Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.480138 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="sg-core" containerID="cri-o://17e44fecce78001640b6db51480f0fa28dc286a09625cc1b1b910fc63f3bdfb4" gracePeriod=30 Feb 26 14:45:21 crc kubenswrapper[4809]: 
I0226 14:45:21.864540 4809 generic.go:334] "Generic (PLEG): container finished" podID="52175667-e934-4c12-a6f0-a05c5006d789" containerID="86aacd17c774ea510cb52539fb46f17afd6b95df3906db85f955867fa3443ae1" exitCode=0 Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.864868 4809 generic.go:334] "Generic (PLEG): container finished" podID="52175667-e934-4c12-a6f0-a05c5006d789" containerID="17e44fecce78001640b6db51480f0fa28dc286a09625cc1b1b910fc63f3bdfb4" exitCode=2 Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.864763 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerDied","Data":"86aacd17c774ea510cb52539fb46f17afd6b95df3906db85f955867fa3443ae1"} Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.864907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerDied","Data":"17e44fecce78001640b6db51480f0fa28dc286a09625cc1b1b910fc63f3bdfb4"} Feb 26 14:45:21 crc kubenswrapper[4809]: I0226 14:45:21.960108 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 14:45:22 crc kubenswrapper[4809]: I0226 14:45:22.267301 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:45:22 crc kubenswrapper[4809]: E0226 14:45:22.267706 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:45:22 crc kubenswrapper[4809]: I0226 14:45:22.881802 4809 generic.go:334] "Generic (PLEG): container finished" podID="52175667-e934-4c12-a6f0-a05c5006d789" containerID="f1dca72ed800b73ddf1050a01afeefeb5ba311539883d6f28019843f4fb822cc" exitCode=0 Feb 26 14:45:22 crc kubenswrapper[4809]: I0226 14:45:22.881974 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerDied","Data":"f1dca72ed800b73ddf1050a01afeefeb5ba311539883d6f28019843f4fb822cc"} Feb 26 14:45:26 crc kubenswrapper[4809]: I0226 14:45:26.235656 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq" containerID="cri-o://977762f79837dcfb02c6d8f1c2230e194433f2dfa838e601df552c7e99fb77e3" gracePeriod=604795 Feb 26 14:45:26 crc kubenswrapper[4809]: I0226 14:45:26.730466 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq" containerID="cri-o://f2b800ad84380177eaf55be9aca6cedd3dd84caabaf74ce9e66be19860e8706a" gracePeriod=604796 Feb 26 14:45:26 crc kubenswrapper[4809]: I0226 14:45:26.953270 4809 generic.go:334] "Generic (PLEG): container finished" podID="52175667-e934-4c12-a6f0-a05c5006d789" containerID="ba7b43be433fd6e67478691f66c675fec15bf7df7f640ebb331e6c8696a7d4c6" exitCode=0 Feb 26 14:45:26 crc kubenswrapper[4809]: I0226 14:45:26.953317 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerDied","Data":"ba7b43be433fd6e67478691f66c675fec15bf7df7f640ebb331e6c8696a7d4c6"} Feb 26 14:45:28 crc kubenswrapper[4809]: I0226 14:45:28.987987 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Feb 26 14:45:29 crc kubenswrapper[4809]: I0226 14:45:29.293197 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.659698 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.764417 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.764824 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.764955 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.765954 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766120 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766222 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766181 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766463 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766584 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfzbv\" (UniqueName: \"kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv\") pod \"52175667-e934-4c12-a6f0-a05c5006d789\" (UID: \"52175667-e934-4c12-a6f0-a05c5006d789\") " Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.766768 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.767837 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.767979 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52175667-e934-4c12-a6f0-a05c5006d789-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.771983 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts" (OuterVolumeSpecName: "scripts") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.775815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv" (OuterVolumeSpecName: "kube-api-access-cfzbv") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "kube-api-access-cfzbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.802789 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.859485 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.870653 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.870686 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.870698 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.870708 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfzbv\" (UniqueName: \"kubernetes.io/projected/52175667-e934-4c12-a6f0-a05c5006d789-kube-api-access-cfzbv\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.900860 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.912842 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data" (OuterVolumeSpecName: "config-data") pod "52175667-e934-4c12-a6f0-a05c5006d789" (UID: "52175667-e934-4c12-a6f0-a05c5006d789"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.973032 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:30 crc kubenswrapper[4809]: I0226 14:45:30.973078 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52175667-e934-4c12-a6f0-a05c5006d789-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.012682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52175667-e934-4c12-a6f0-a05c5006d789","Type":"ContainerDied","Data":"6461bbf50d643c93643748e4ce5ed11fa173d389fe3a09c54a9278a0be93d8d9"} Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.012750 4809 scope.go:117] "RemoveContainer" containerID="86aacd17c774ea510cb52539fb46f17afd6b95df3906db85f955867fa3443ae1" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.012887 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.065429 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.087425 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.104713 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:45:31 crc kubenswrapper[4809]: E0226 14:45:31.105400 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="proxy-httpd" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105426 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="proxy-httpd" Feb 26 14:45:31 crc kubenswrapper[4809]: E0226 14:45:31.105445 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-central-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105453 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-central-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: E0226 14:45:31.105488 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-notification-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105497 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-notification-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: E0226 14:45:31.105549 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="sg-core" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105558 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="sg-core" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105878 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="sg-core" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105915 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-central-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105938 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="proxy-httpd" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.105958 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="52175667-e934-4c12-a6f0-a05c5006d789" containerName="ceilometer-notification-agent" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.108494 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.110887 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.111090 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.111281 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.111875 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.179758 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.179928 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-run-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180202 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-log-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180255 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-scripts\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180324 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180456 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180494 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-config-data\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.180551 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lvz\" (UniqueName: 
\"kubernetes.io/projected/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-kube-api-access-42lvz\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282319 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-log-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282401 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-scripts\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-config-data\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282609 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42lvz\" (UniqueName: \"kubernetes.io/projected/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-kube-api-access-42lvz\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282698 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282762 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-run-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.282864 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-log-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.283800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-run-httpd\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.287068 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-scripts\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.287088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.288705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.288958 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.292399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-config-data\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.301414 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42lvz\" (UniqueName: \"kubernetes.io/projected/a80c453e-f839-4b12-acd5-c0e59ba4b2cc-kube-api-access-42lvz\") pod \"ceilometer-0\" (UID: \"a80c453e-f839-4b12-acd5-c0e59ba4b2cc\") " pod="openstack/ceilometer-0" Feb 26 14:45:31 crc kubenswrapper[4809]: I0226 14:45:31.434466 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 14:45:32 crc kubenswrapper[4809]: I0226 14:45:32.272512 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52175667-e934-4c12-a6f0-a05c5006d789" path="/var/lib/kubelet/pods/52175667-e934-4c12-a6f0-a05c5006d789/volumes" Feb 26 14:45:34 crc kubenswrapper[4809]: I0226 14:45:34.084573 4809 generic.go:334] "Generic (PLEG): container finished" podID="f375c9b0-076d-4c28-adde-74405cf866bc" containerID="977762f79837dcfb02c6d8f1c2230e194433f2dfa838e601df552c7e99fb77e3" exitCode=0 Feb 26 14:45:34 crc kubenswrapper[4809]: I0226 14:45:34.084975 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerDied","Data":"977762f79837dcfb02c6d8f1c2230e194433f2dfa838e601df552c7e99fb77e3"} Feb 26 14:45:34 crc kubenswrapper[4809]: I0226 14:45:34.089134 4809 generic.go:334] "Generic (PLEG): container finished" podID="94b1d0fc-c81e-40db-a043-fd5992788567" containerID="f2b800ad84380177eaf55be9aca6cedd3dd84caabaf74ce9e66be19860e8706a" exitCode=0 Feb 26 14:45:34 crc kubenswrapper[4809]: I0226 14:45:34.089181 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerDied","Data":"f2b800ad84380177eaf55be9aca6cedd3dd84caabaf74ce9e66be19860e8706a"} Feb 26 14:45:35 crc kubenswrapper[4809]: I0226 14:45:35.257254 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:45:35 crc kubenswrapper[4809]: E0226 14:45:35.257544 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:45:38 crc kubenswrapper[4809]: I0226 14:45:38.988625 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Feb 26 14:45:39 crc kubenswrapper[4809]: I0226 14:45:39.292539 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.698053 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"] Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.700449 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.709664 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.726645 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"] Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.762376 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsml\" (UniqueName: \"kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.762713 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.762824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.762992 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.763100 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.763231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.763265 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866185 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: 
\"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866304 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866349 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866429 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsml\" (UniqueName: \"kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.866735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.867626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.867640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.867957 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x" 
Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.868293 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x"
Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.868672 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x"
Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.868985 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x"
Feb 26 14:45:40 crc kubenswrapper[4809]: I0226 14:45:40.892554 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsml\" (UniqueName: \"kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml\") pod \"dnsmasq-dns-68df85789f-v2b2x\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " pod="openstack/dnsmasq-dns-68df85789f-v2b2x"
Feb 26 14:45:41 crc kubenswrapper[4809]: I0226 14:45:41.049960 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-v2b2x"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.310002 4809 scope.go:117] "RemoveContainer" containerID="17e44fecce78001640b6db51480f0fa28dc286a09625cc1b1b910fc63f3bdfb4"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.394793 4809 scope.go:117] "RemoveContainer" containerID="ba7b43be433fd6e67478691f66c675fec15bf7df7f640ebb331e6c8696a7d4c6"
Feb 26 14:45:42 crc kubenswrapper[4809]: E0226 14:45:42.427520 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 26 14:45:42 crc kubenswrapper[4809]: E0226 14:45:42.427927 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 26 14:45:42 crc kubenswrapper[4809]: E0226 14:45:42.428111 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-665n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-mnms7_openstack(b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 26 14:45:42 crc kubenswrapper[4809]: E0226 14:45:42.429653 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-mnms7" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.541141 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.567519 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.577420 4809 scope.go:117] "RemoveContainer" containerID="f1dca72ed800b73ddf1050a01afeefeb5ba311539883d6f28019843f4fb822cc"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.630202 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632070 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632152 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lz9\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632257 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632309 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632342 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632374 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632410 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.632440 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.636596 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640160 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640269 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640299 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.637184 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9" (OuterVolumeSpecName: "kube-api-access-r4lz9") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "kube-api-access-r4lz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640452 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640506 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640555 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640577 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640609 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640654 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640690 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hsg8\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640746 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf\") pod \"94b1d0fc-c81e-40db-a043-fd5992788567\" (UID: \"94b1d0fc-c81e-40db-a043-fd5992788567\") "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.640939 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.641394 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.642301 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lz9\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-kube-api-access-r4lz9\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.642330 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.642343 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.642941 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.643382 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.645614 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.647333 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.651034 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.659783 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.665483 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8" (OuterVolumeSpecName: "kube-api-access-9hsg8") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "kube-api-access-9hsg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.665597 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.665954 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info" (OuterVolumeSpecName: "pod-info") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.666277 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.671053 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info" (OuterVolumeSpecName: "pod-info") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.717188 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data" (OuterVolumeSpecName: "config-data") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.718170 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12" (OuterVolumeSpecName: "persistence") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: E0226 14:45:42.724740 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757 podName:f375c9b0-076d-4c28-adde-74405cf866bc nodeName:}" failed. No retries permitted until 2026-02-26 14:45:43.224714155 +0000 UTC m=+1921.698034678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745754 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") on node \"crc\" "
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745782 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f375c9b0-076d-4c28-adde-74405cf866bc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745792 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745801 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745810 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745819 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745828 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94b1d0fc-c81e-40db-a043-fd5992788567-pod-info\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745837 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745845 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hsg8\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-kube-api-access-9hsg8\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745853 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745861 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94b1d0fc-c81e-40db-a043-fd5992788567-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745869 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.745876 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f375c9b0-076d-4c28-adde-74405cf866bc-pod-info\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.786206 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data" (OuterVolumeSpecName: "config-data") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.791993 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf" (OuterVolumeSpecName: "server-conf") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.798694 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf" (OuterVolumeSpecName: "server-conf") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.825366 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "94b1d0fc-c81e-40db-a043-fd5992788567" (UID: "94b1d0fc-c81e-40db-a043-fd5992788567"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.832573 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.832721 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12") on node "crc"
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.852704 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-server-conf\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.852728 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94b1d0fc-c81e-40db-a043-fd5992788567-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.852740 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f375c9b0-076d-4c28-adde-74405cf866bc-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.852749 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.852759 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94b1d0fc-c81e-40db-a043-fd5992788567-server-conf\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.881135 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.955354 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f375c9b0-076d-4c28-adde-74405cf866bc-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:42 crc kubenswrapper[4809]: I0226 14:45:42.999495 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.071317 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 14:45:43 crc kubenswrapper[4809]: W0226 14:45:43.083594 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80c453e_f839_4b12_acd5_c0e59ba4b2cc.slice/crio-3626ef21f5e446e950c4f66ffe0d4a457327558145d5dfc7bc0ee4643de5f795 WatchSource:0}: Error finding container 3626ef21f5e446e950c4f66ffe0d4a457327558145d5dfc7bc0ee4643de5f795: Status 404 returned error can't find the container with id 3626ef21f5e446e950c4f66ffe0d4a457327558145d5dfc7bc0ee4643de5f795
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.232172 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"3626ef21f5e446e950c4f66ffe0d4a457327558145d5dfc7bc0ee4643de5f795"}
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.238795 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f375c9b0-076d-4c28-adde-74405cf866bc","Type":"ContainerDied","Data":"b47c7c94a46fc59e09925d36dd8dd8002c87e7b8f5bc985b4396fc84a098e9e1"}
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.238837 4809 scope.go:117] "RemoveContainer" containerID="977762f79837dcfb02c6d8f1c2230e194433f2dfa838e601df552c7e99fb77e3"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.238951 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.242825 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" event={"ID":"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81","Type":"ContainerStarted","Data":"b276dc28f9e027069dd8c8361e9d18837391362196702259b2ea5883e2849ca1"}
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.245743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"94b1d0fc-c81e-40db-a043-fd5992788567","Type":"ContainerDied","Data":"ce5f1a6f7a265ff01991096032e94f8d423b028af22308b9e3b452fd3933581b"}
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.245795 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: E0226 14:45:43.249598 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-mnms7" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.266786 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"f375c9b0-076d-4c28-adde-74405cf866bc\" (UID: \"f375c9b0-076d-4c28-adde-74405cf866bc\") "
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.278704 4809 scope.go:117] "RemoveContainer" containerID="92e81cc7c063f704ca11a1f2d5e5c240fa2b9fed516b8e5beefe4d1a6fee7d42"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.295553 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757" (OuterVolumeSpecName: "persistence") pod "f375c9b0-076d-4c28-adde-74405cf866bc" (UID: "f375c9b0-076d-4c28-adde-74405cf866bc"). InnerVolumeSpecName "pvc-86e1c498-6209-4797-88f4-437040943757". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.339872 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.366735 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.372356 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") on node \"crc\" "
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.383304 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.392384 4809 scope.go:117] "RemoveContainer" containerID="f2b800ad84380177eaf55be9aca6cedd3dd84caabaf74ce9e66be19860e8706a"
Feb 26 14:45:43 crc kubenswrapper[4809]: E0226 14:45:43.393203 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="setup-container"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393228 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="setup-container"
Feb 26 14:45:43 crc kubenswrapper[4809]: E0226 14:45:43.393287 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="setup-container"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393294 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="setup-container"
Feb 26 14:45:43 crc kubenswrapper[4809]: E0226 14:45:43.393326 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393333 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: E0226 14:45:43.393348 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393395 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393822 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.393875 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" containerName="rabbitmq"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.400923 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.401260 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.407491 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.407683 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ggb8f"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.407766 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.407848 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.407966 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.408245 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.408322 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.427157 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.427492 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-86e1c498-6209-4797-88f4-437040943757" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757") on node "crc"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.467655 4809 scope.go:117] "RemoveContainer" containerID="b5a794cb606575426cb262a59cb8e194a419febe2842acf21e046bbdc5123016"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475474 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475543 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475763 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475828 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vzwz\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-kube-api-access-9vzwz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475862 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fadb9f7-5f45-40bb-a288-8332be9f3c10-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475917 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.475940 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fadb9f7-5f45-40bb-a288-8332be9f3c10-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.476029 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.476062 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.476141 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") on node \"crc\" DevicePath \"\""
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.487611 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.529448 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.551210 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.559000 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.569592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581335 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581370 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581392 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581426 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581450 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581512 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1b541d8-7c08-42e8-831b-6e3d7262277a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581586 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1b541d8-7c08-42e8-831b-6e3d7262277a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581609 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll4h\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-kube-api-access-tll4h\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581640 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581724 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vzwz\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-kube-api-access-9vzwz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581741 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581755 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fadb9f7-5f45-40bb-a288-8332be9f3c10-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581835 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581850 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fadb9f7-5f45-40bb-a288-8332be9f3c10-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581880 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.581902 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.582862 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.586224 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.586821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.587381 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.588565 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fadb9f7-5f45-40bb-a288-8332be9f3c10-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.592964 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fadb9f7-5f45-40bb-a288-8332be9f3c10-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.593353 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fadb9f7-5f45-40bb-a288-8332be9f3c10-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.593423 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.594692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.599094 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.599129 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a3438c8e38dcffb88c2bb9cce9738361e0ac40dc000de58df3e22d6950d7f0c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.620316 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vzwz\" (UniqueName: \"kubernetes.io/projected/7fadb9f7-5f45-40bb-a288-8332be9f3c10-kube-api-access-9vzwz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.682072 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39d35d27-d1dc-4426-a477-88ec2e42bd12\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fadb9f7-5f45-40bb-a288-8332be9f3c10\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.689348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.689661 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.689869 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.690258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.690585 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.690874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.691191 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1b541d8-7c08-42e8-831b-6e3d7262277a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.691994 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1b541d8-7c08-42e8-831b-6e3d7262277a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.692504 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.691482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-config-data\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.692514 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll4h\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-kube-api-access-tll4h\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.693168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.693811 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.694273 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.694433 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.695282 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1b541d8-7c08-42e8-831b-6e3d7262277a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.695939 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.697358 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2"
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.697896 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.697933 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bd655ee73774e45add2c059d6525cf05e0989d68eb1fadb6969bdbf604263d4/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.697958 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1b541d8-7c08-42e8-831b-6e3d7262277a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2" Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.699068 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1b541d8-7c08-42e8-831b-6e3d7262277a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2" Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.711618 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll4h\" (UniqueName: \"kubernetes.io/projected/f1b541d8-7c08-42e8-831b-6e3d7262277a-kube-api-access-tll4h\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2" Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.768824 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:45:43 crc kubenswrapper[4809]: I0226 14:45:43.793245 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86e1c498-6209-4797-88f4-437040943757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86e1c498-6209-4797-88f4-437040943757\") pod \"rabbitmq-server-2\" (UID: \"f1b541d8-7c08-42e8-831b-6e3d7262277a\") " pod="openstack/rabbitmq-server-2" Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.092056 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 26 14:45:44 crc kubenswrapper[4809]: W0226 14:45:44.275466 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fadb9f7_5f45_40bb_a288_8332be9f3c10.slice/crio-4f2e14517b4b0030873a5c7190060b050f834b5f3496e860ad37142a97cdfaa3 WatchSource:0}: Error finding container 4f2e14517b4b0030873a5c7190060b050f834b5f3496e860ad37142a97cdfaa3: Status 404 returned error can't find the container with id 4f2e14517b4b0030873a5c7190060b050f834b5f3496e860ad37142a97cdfaa3 Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.275698 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerID="879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040" exitCode=0 Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.277526 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b1d0fc-c81e-40db-a043-fd5992788567" path="/var/lib/kubelet/pods/94b1d0fc-c81e-40db-a043-fd5992788567/volumes" Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.283745 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f375c9b0-076d-4c28-adde-74405cf866bc" path="/var/lib/kubelet/pods/f375c9b0-076d-4c28-adde-74405cf866bc/volumes" Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.285543 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.285574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" event={"ID":"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81","Type":"ContainerDied","Data":"879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040"} Feb 26 14:45:44 crc kubenswrapper[4809]: I0226 14:45:44.594253 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 26 14:45:44 crc kubenswrapper[4809]: W0226 14:45:44.597212 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1b541d8_7c08_42e8_831b_6e3d7262277a.slice/crio-5cd4decd856171884ab3e9f84174fa1f51b26a001ddeb3147f8f4cfdf9de4867 WatchSource:0}: Error finding container 5cd4decd856171884ab3e9f84174fa1f51b26a001ddeb3147f8f4cfdf9de4867: Status 404 returned error can't find the container with id 5cd4decd856171884ab3e9f84174fa1f51b26a001ddeb3147f8f4cfdf9de4867 Feb 26 14:45:45 crc kubenswrapper[4809]: I0226 14:45:45.295899 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f1b541d8-7c08-42e8-831b-6e3d7262277a","Type":"ContainerStarted","Data":"5cd4decd856171884ab3e9f84174fa1f51b26a001ddeb3147f8f4cfdf9de4867"} Feb 26 14:45:45 crc kubenswrapper[4809]: I0226 14:45:45.303495 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" event={"ID":"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81","Type":"ContainerStarted","Data":"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e"} Feb 26 14:45:45 crc kubenswrapper[4809]: I0226 14:45:45.306095 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:45 crc kubenswrapper[4809]: I0226 14:45:45.312496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"7fadb9f7-5f45-40bb-a288-8332be9f3c10","Type":"ContainerStarted","Data":"4f2e14517b4b0030873a5c7190060b050f834b5f3496e860ad37142a97cdfaa3"} Feb 26 14:45:45 crc kubenswrapper[4809]: I0226 14:45:45.330291 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" podStartSLOduration=5.330275144 podStartE2EDuration="5.330275144s" podCreationTimestamp="2026-02-26 14:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:45:45.328956296 +0000 UTC m=+1923.802276829" watchObservedRunningTime="2026-02-26 14:45:45.330275144 +0000 UTC m=+1923.803595667" Feb 26 14:45:46 crc kubenswrapper[4809]: I0226 14:45:46.326461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fadb9f7-5f45-40bb-a288-8332be9f3c10","Type":"ContainerStarted","Data":"399ec393cbf606873b6b85523bed66650d7b248c8393100d76d88409686c265f"} Feb 26 14:45:49 crc kubenswrapper[4809]: I0226 14:45:49.258134 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:45:49 crc kubenswrapper[4809]: E0226 14:45:49.258813 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:45:49 crc kubenswrapper[4809]: I0226 14:45:49.372995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f1b541d8-7c08-42e8-831b-6e3d7262277a","Type":"ContainerStarted","Data":"1e419141af78b78e9b6ca511111b00a5cd91b030e479f8b05c3d0132af604544"} Feb 26 14:45:49 crc kubenswrapper[4809]: I0226 14:45:49.376693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"6f9457216ee1bd106526d009afe3395c0b8603ee1861cacb9352f53c4f9eed5a"} Feb 26 14:45:50 crc kubenswrapper[4809]: I0226 14:45:50.409125 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"04f244bed134ebfe03022b3e331d69a81cc7766f8d998d0543c4d857dd80180b"} Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.052194 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.128762 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.129060 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="dnsmasq-dns" containerID="cri-o://09673654fe4d9c56211f5d9e626e6b0613124d76acf90290eecd49efee6aacf5" gracePeriod=10 Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.341075 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-mp2sl"] Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.344317 4809 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.406412 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-mp2sl"] Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.446298 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"616890c310b33386f6c4fb8b548eca70f6f1c8aae2ef677f87287c679ab6aca9"} Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523386 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523580 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7sz\" (UniqueName: \"kubernetes.io/projected/889cb62e-7001-42d1-9e5f-afe69fb0fea0-kube-api-access-4t7sz\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523733 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-config\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523777 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-svc\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.523812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.627730 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-config\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.627798 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-svc\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.627842 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.627930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.628095 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.628125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7sz\" (UniqueName: \"kubernetes.io/projected/889cb62e-7001-42d1-9e5f-afe69fb0fea0-kube-api-access-4t7sz\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.628201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.628956 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.628999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.629157 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.629442 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.629472 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-config\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.630125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/889cb62e-7001-42d1-9e5f-afe69fb0fea0-dns-svc\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.661229 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7sz\" (UniqueName: \"kubernetes.io/projected/889cb62e-7001-42d1-9e5f-afe69fb0fea0-kube-api-access-4t7sz\") pod \"dnsmasq-dns-bb85b8995-mp2sl\" (UID: \"889cb62e-7001-42d1-9e5f-afe69fb0fea0\") " pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.671676 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.5:5353: connect: connection refused" Feb 26 14:45:51 crc kubenswrapper[4809]: I0226 14:45:51.677448 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.164180 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-mp2sl"] Feb 26 14:45:52 crc kubenswrapper[4809]: W0226 14:45:52.165606 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod889cb62e_7001_42d1_9e5f_afe69fb0fea0.slice/crio-29b785ea016eda6e4fefdc1aab4dc87a4d2267a4a6d822ca4aa40c803610c1ae WatchSource:0}: Error finding container 29b785ea016eda6e4fefdc1aab4dc87a4d2267a4a6d822ca4aa40c803610c1ae: Status 404 returned error can't find the container with id 29b785ea016eda6e4fefdc1aab4dc87a4d2267a4a6d822ca4aa40c803610c1ae Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.564905 4809 generic.go:334] "Generic (PLEG): container finished" podID="d162503c-e431-4c79-9c71-f96f5b981f45" containerID="09673654fe4d9c56211f5d9e626e6b0613124d76acf90290eecd49efee6aacf5" exitCode=0 Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.565931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" event={"ID":"d162503c-e431-4c79-9c71-f96f5b981f45","Type":"ContainerDied","Data":"09673654fe4d9c56211f5d9e626e6b0613124d76acf90290eecd49efee6aacf5"} Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.569162 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" event={"ID":"889cb62e-7001-42d1-9e5f-afe69fb0fea0","Type":"ContainerStarted","Data":"29b785ea016eda6e4fefdc1aab4dc87a4d2267a4a6d822ca4aa40c803610c1ae"} Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.599137 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.689586 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb\") pod \"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.689656 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb\") pod \"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.689697 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkwz\" (UniqueName: \"kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz\") pod \"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.689790 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0\") pod \"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.689843 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config\") pod 
\"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.690024 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc\") pod \"d162503c-e431-4c79-9c71-f96f5b981f45\" (UID: \"d162503c-e431-4c79-9c71-f96f5b981f45\") " Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.705256 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz" (OuterVolumeSpecName: "kube-api-access-mkkwz") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "kube-api-access-mkkwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.779063 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.780818 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config" (OuterVolumeSpecName: "config") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.781453 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.793091 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.793136 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.793149 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.793158 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkwz\" (UniqueName: \"kubernetes.io/projected/d162503c-e431-4c79-9c71-f96f5b981f45-kube-api-access-mkkwz\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.796195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.800623 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d162503c-e431-4c79-9c71-f96f5b981f45" (UID: "d162503c-e431-4c79-9c71-f96f5b981f45"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.895563 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:52 crc kubenswrapper[4809]: I0226 14:45:52.895615 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d162503c-e431-4c79-9c71-f96f5b981f45-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.581672 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" event={"ID":"d162503c-e431-4c79-9c71-f96f5b981f45","Type":"ContainerDied","Data":"d4a46584e15fb7161349e9a3a05b85e0ccfe4cef9f062c317bae933c39a1de08"} Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.581720 4809 scope.go:117] "RemoveContainer" containerID="09673654fe4d9c56211f5d9e626e6b0613124d76acf90290eecd49efee6aacf5" Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.581841 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-g78k4" Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.588472 4809 generic.go:334] "Generic (PLEG): container finished" podID="889cb62e-7001-42d1-9e5f-afe69fb0fea0" containerID="711dbbde74af1dc84d380cded154449e1eacc71a9158f87bdfc929fed83121aa" exitCode=0 Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.588632 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" event={"ID":"889cb62e-7001-42d1-9e5f-afe69fb0fea0","Type":"ContainerDied","Data":"711dbbde74af1dc84d380cded154449e1eacc71a9158f87bdfc929fed83121aa"} Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.655462 4809 scope.go:117] "RemoveContainer" containerID="b341e7d88c926e644044a37ead9b07c7ea6522f7155fb5f215bfccfa0884f481" Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.666164 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:45:53 crc kubenswrapper[4809]: I0226 14:45:53.683734 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-g78k4"] Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.271676 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" path="/var/lib/kubelet/pods/d162503c-e431-4c79-9c71-f96f5b981f45/volumes" Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.602283 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"510524561ab0a278955a0ed98336ea5a61c39c4078218ff0c58f700c0dc703a5"} Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.602597 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.604447 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" event={"ID":"889cb62e-7001-42d1-9e5f-afe69fb0fea0","Type":"ContainerStarted","Data":"fc053d787a7cadc140dde97d983ae4501de5fbc87c40761498e33a0c4af4c838"} Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.604597 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.629562 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.450419782 podStartE2EDuration="23.629536906s" podCreationTimestamp="2026-02-26 14:45:31 +0000 UTC" firstStartedPulling="2026-02-26 14:45:43.086089031 +0000 UTC m=+1921.559409554" lastFinishedPulling="2026-02-26 14:45:54.265206155 +0000 UTC m=+1932.738526678" observedRunningTime="2026-02-26 14:45:54.621883816 +0000 UTC m=+1933.095204339" watchObservedRunningTime="2026-02-26 14:45:54.629536906 +0000 UTC m=+1933.102857439" Feb 26 14:45:54 crc kubenswrapper[4809]: I0226 14:45:54.654543 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" podStartSLOduration=3.654524742 podStartE2EDuration="3.654524742s" podCreationTimestamp="2026-02-26 14:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:45:54.648401516 +0000 UTC m=+1933.121722039" watchObservedRunningTime="2026-02-26 14:45:54.654524742 +0000 UTC m=+1933.127845285" Feb 26 14:45:59 crc 
kubenswrapper[4809]: I0226 14:45:59.686650 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mnms7" event={"ID":"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd","Type":"ContainerStarted","Data":"4bfce04cf3f0603489992f7c0230f510181e5ef797f594767ad222cbf44927aa"} Feb 26 14:45:59 crc kubenswrapper[4809]: I0226 14:45:59.717993 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-mnms7" podStartSLOduration=2.042740721 podStartE2EDuration="41.717969827s" podCreationTimestamp="2026-02-26 14:45:18 +0000 UTC" firstStartedPulling="2026-02-26 14:45:19.685531214 +0000 UTC m=+1898.158851737" lastFinishedPulling="2026-02-26 14:45:59.36076032 +0000 UTC m=+1937.834080843" observedRunningTime="2026-02-26 14:45:59.703810121 +0000 UTC m=+1938.177130654" watchObservedRunningTime="2026-02-26 14:45:59.717969827 +0000 UTC m=+1938.191290360" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.165291 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535286-gswpg"] Feb 26 14:46:00 crc kubenswrapper[4809]: E0226 14:46:00.166185 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="init" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.166213 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="init" Feb 26 14:46:00 crc kubenswrapper[4809]: E0226 14:46:00.166307 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="dnsmasq-dns" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.166322 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="dnsmasq-dns" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.166682 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d162503c-e431-4c79-9c71-f96f5b981f45" containerName="dnsmasq-dns" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.167917 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.170813 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.171474 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.171567 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.220153 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhqg6\" (UniqueName: \"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6\") pod \"auto-csr-approver-29535286-gswpg\" (UID: \"345b13de-06f8-47c7-a9e4-e18fa30835a3\") " pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.232444 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-gswpg"] Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.257751 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:46:00 crc kubenswrapper[4809]: E0226 14:46:00.258211 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.323028 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhqg6\" (UniqueName: \"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6\") pod \"auto-csr-approver-29535286-gswpg\" (UID: \"345b13de-06f8-47c7-a9e4-e18fa30835a3\") " pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.345357 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhqg6\" (UniqueName: \"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6\") pod \"auto-csr-approver-29535286-gswpg\" (UID: \"345b13de-06f8-47c7-a9e4-e18fa30835a3\") " pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:00 crc kubenswrapper[4809]: I0226 14:46:00.517714 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:01 crc kubenswrapper[4809]: W0226 14:46:01.022578 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod345b13de_06f8_47c7_a9e4_e18fa30835a3.slice/crio-7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a WatchSource:0}: Error finding container 7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a: Status 404 returned error can't find the container with id 7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a Feb 26 14:46:01 crc kubenswrapper[4809]: I0226 14:46:01.025329 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-gswpg"] Feb 26 14:46:01 crc kubenswrapper[4809]: I0226 14:46:01.679186 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb85b8995-mp2sl" Feb 26 14:46:01 crc kubenswrapper[4809]: I0226 14:46:01.718061 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-gswpg" event={"ID":"345b13de-06f8-47c7-a9e4-e18fa30835a3","Type":"ContainerStarted","Data":"7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a"} Feb 26 14:46:01 crc kubenswrapper[4809]: I0226 14:46:01.775905 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"] Feb 26 14:46:01 crc kubenswrapper[4809]: I0226 14:46:01.776206 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="dnsmasq-dns" containerID="cri-o://dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e" gracePeriod=10 Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.516664 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587450 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587630 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587682 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmsml\" (UniqueName: \"kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587814 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.587902 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0\") pod \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\" (UID: \"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81\") " Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.631487 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml" (OuterVolumeSpecName: "kube-api-access-kmsml") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "kube-api-access-kmsml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.681072 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.682570 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config" (OuterVolumeSpecName: "config") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.698312 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.698355 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-config\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.698369 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmsml\" (UniqueName: \"kubernetes.io/projected/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-kube-api-access-kmsml\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.699998 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.707754 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.709368 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.710871 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" (UID: "0a5058ed-de2e-4cf8-9130-99d9cfe0ba81"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.732877 4809 generic.go:334] "Generic (PLEG): container finished" podID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" containerID="4bfce04cf3f0603489992f7c0230f510181e5ef797f594767ad222cbf44927aa" exitCode=0 Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.733228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mnms7" event={"ID":"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd","Type":"ContainerDied","Data":"4bfce04cf3f0603489992f7c0230f510181e5ef797f594767ad222cbf44927aa"} Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.737136 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerID="dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e" exitCode=0 Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.737185 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" event={"ID":"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81","Type":"ContainerDied","Data":"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e"} Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.737214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" event={"ID":"0a5058ed-de2e-4cf8-9130-99d9cfe0ba81","Type":"ContainerDied","Data":"b276dc28f9e027069dd8c8361e9d18837391362196702259b2ea5883e2849ca1"} Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.737230 4809 scope.go:117] "RemoveContainer" containerID="dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.737539 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-v2b2x" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.780191 4809 scope.go:117] "RemoveContainer" containerID="879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.790679 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"] Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.800381 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.800421 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.800435 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.800446 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.801833 4809 scope.go:117] "RemoveContainer" containerID="dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.801977 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-v2b2x"] Feb 26 14:46:02 crc kubenswrapper[4809]: E0226 14:46:02.802187 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e\": container with ID starting with dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e not found: ID does not exist" containerID="dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.802228 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e"} err="failed to get container status \"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e\": rpc error: code = NotFound desc = could not find container \"dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e\": container with ID starting with dd95ce8e39455f4b0b85cac64958405d06ac5239592c19fbdb3c87eb3f1be10e not found: ID does not exist" Feb 26 14:46:02 crc kubenswrapper[4809]: I0226 14:46:02.802256 4809 scope.go:117] "RemoveContainer" containerID="879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040" Feb 26 14:46:02 crc kubenswrapper[4809]: E0226 14:46:02.802595 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040\": container with ID starting with 879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040 not found: ID does not exist" containerID="879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040" Feb 26 14:46:02 crc 
kubenswrapper[4809]: I0226 14:46:02.802631 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040"} err="failed to get container status \"879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040\": rpc error: code = NotFound desc = could not find container \"879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040\": container with ID starting with 879f40cb570cd0d00de19f0b30b881197f530f276bb9912f2cf000247de77040 not found: ID does not exist" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.211572 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mnms7" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.251908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle\") pod \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.252005 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-665n8\" (UniqueName: \"kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8\") pod \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.252121 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data\") pod \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\" (UID: \"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd\") " Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.286646 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8" (OuterVolumeSpecName: "kube-api-access-665n8") pod "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" (UID: "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd"). InnerVolumeSpecName "kube-api-access-665n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.301742 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" path="/var/lib/kubelet/pods/0a5058ed-de2e-4cf8-9130-99d9cfe0ba81/volumes" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.307619 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" (UID: "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.360377 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-665n8\" (UniqueName: \"kubernetes.io/projected/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-kube-api-access-665n8\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.360609 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.367566 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data" (OuterVolumeSpecName: "config-data") pod "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" (UID: "b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.463140 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.770757 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mnms7" event={"ID":"b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd","Type":"ContainerDied","Data":"41a38271df404556e54e3ed6a8b7be7da89a25a6621d1a9a8d411656774945ee"} Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.770814 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a38271df404556e54e3ed6a8b7be7da89a25a6621d1a9a8d411656774945ee" Feb 26 14:46:04 crc kubenswrapper[4809]: I0226 14:46:04.770946 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mnms7" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.827333 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-79989866bd-79zhg"] Feb 26 14:46:05 crc kubenswrapper[4809]: E0226 14:46:05.828030 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="dnsmasq-dns" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.828046 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="dnsmasq-dns" Feb 26 14:46:05 crc kubenswrapper[4809]: E0226 14:46:05.828092 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="init" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.828100 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="init" Feb 26 14:46:05 crc kubenswrapper[4809]: E0226 14:46:05.828124 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" containerName="heat-db-sync" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.828131 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" containerName="heat-db-sync" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.828406 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" containerName="heat-db-sync" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.828439 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5058ed-de2e-4cf8-9130-99d9cfe0ba81" containerName="dnsmasq-dns" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.829443 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.853446 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79989866bd-79zhg"] Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.877896 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-685c45777-gq64z"] Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.879692 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.901185 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-685c45777-gq64z"] Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904354 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll98c\" (UniqueName: \"kubernetes.io/projected/078913dd-883e-474c-bf17-8a5b75aaf507-kube-api-access-ll98c\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-combined-ca-bundle\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904417 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data-custom\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904434 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904518 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data-custom\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904560 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904592 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-public-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904671 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-combined-ca-bundle\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904702 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-internal-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.904733 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9vr\" (UniqueName: \"kubernetes.io/projected/e4f61962-0554-496c-9a5f-da2ed271ddd8-kube-api-access-jb9vr\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.940070 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f7b855898-fb2p7"] Feb 26 14:46:05 crc kubenswrapper[4809]: I0226 14:46:05.954419 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.011854 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data-custom\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012720 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn66\" (UniqueName: \"kubernetes.io/projected/87e5ac37-643a-4fb7-8dab-d40645ac9dca-kube-api-access-5kn66\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012747 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-public-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-combined-ca-bundle\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012958 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-combined-ca-bundle\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.012997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-internal-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013077 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data-custom\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013104 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9vr\" (UniqueName: \"kubernetes.io/projected/e4f61962-0554-496c-9a5f-da2ed271ddd8-kube-api-access-jb9vr\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-internal-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013329 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-public-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll98c\" (UniqueName: \"kubernetes.io/projected/078913dd-883e-474c-bf17-8a5b75aaf507-kube-api-access-ll98c\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-combined-ca-bundle\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.013420 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data-custom\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc 
kubenswrapper[4809]: I0226 14:46:06.013527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.017764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-internal-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.017823 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-combined-ca-bundle\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.017951 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-public-tls-certs\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.019382 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data-custom\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.021330 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f7b855898-fb2p7"] Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.021619 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-combined-ca-bundle\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.026603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data-custom\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.028667 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078913dd-883e-474c-bf17-8a5b75aaf507-config-data\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.032252 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f61962-0554-496c-9a5f-da2ed271ddd8-config-data\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 
14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.033028 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9vr\" (UniqueName: \"kubernetes.io/projected/e4f61962-0554-496c-9a5f-da2ed271ddd8-kube-api-access-jb9vr\") pod \"heat-api-685c45777-gq64z\" (UID: \"e4f61962-0554-496c-9a5f-da2ed271ddd8\") " pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.036024 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll98c\" (UniqueName: \"kubernetes.io/projected/078913dd-883e-474c-bf17-8a5b75aaf507-kube-api-access-ll98c\") pod \"heat-engine-79989866bd-79zhg\" (UID: \"078913dd-883e-474c-bf17-8a5b75aaf507\") " pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.114605 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn66\" (UniqueName: \"kubernetes.io/projected/87e5ac37-643a-4fb7-8dab-d40645ac9dca-kube-api-access-5kn66\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.114652 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.114680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-combined-ca-bundle\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.115436 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data-custom\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.115474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-internal-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.115550 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-public-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.119443 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-internal-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc 
kubenswrapper[4809]: I0226 14:46:06.120694 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-public-tls-certs\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.121601 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.121936 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-combined-ca-bundle\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.126377 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e5ac37-643a-4fb7-8dab-d40645ac9dca-config-data-custom\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: E0226 14:46:06.131158 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest latest in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 14:46:06 crc kubenswrapper[4809]: E0226 14:46:06.131347 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 14:46:06 crc kubenswrapper[4809]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 14:46:06 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhqg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535286-gswpg_openshift-infra(345b13de-06f8-47c7-a9e4-e18fa30835a3): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest latest in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 502 Bad Gateway Feb 26 14:46:06 crc kubenswrapper[4809]: > logger="UnhandledError" Feb 26 14:46:06 crc kubenswrapper[4809]: E0226 14:46:06.132773 4809 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest latest in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-infra/auto-csr-approver-29535286-gswpg" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.134542 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn66\" (UniqueName: \"kubernetes.io/projected/87e5ac37-643a-4fb7-8dab-d40645ac9dca-kube-api-access-5kn66\") pod \"heat-cfnapi-6f7b855898-fb2p7\" (UID: \"87e5ac37-643a-4fb7-8dab-d40645ac9dca\") " pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.150000 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.215683 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.297787 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.676195 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79989866bd-79zhg"] Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.788093 4809 scope.go:117] "RemoveContainer" containerID="4e3344f1a50d3b4df286abd52a5ffc94d18033e941a513014a78555520ebbf12" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.829897 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79989866bd-79zhg" event={"ID":"078913dd-883e-474c-bf17-8a5b75aaf507","Type":"ContainerStarted","Data":"0b4407439af0aac16e76b12048260606c352116de2839dd98c283b087c3542d5"} Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.843516 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f7b855898-fb2p7"] Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.848134 4809 scope.go:117] "RemoveContainer" containerID="062b0915cb5928792785fca79342ace0567a8c18187b6b6faaf8a20741ed4e1e" Feb 26 14:46:06 crc kubenswrapper[4809]: E0226 14:46:06.855499 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535286-gswpg" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.865292 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-685c45777-gq64z"] Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.897828 4809 scope.go:117] "RemoveContainer" containerID="618904a039a8a8b2f4745c4e212aa5556c71173015640daf854ca9e64b9a9ea6" Feb 26 14:46:06 crc kubenswrapper[4809]: I0226 14:46:06.929290 4809 scope.go:117] "RemoveContainer" containerID="43e9f84d9a5f8a7da1aadc798f85d0198a07606d11436546c47208d39f37b263" Feb 26 14:46:07 crc kubenswrapper[4809]: I0226 14:46:07.846389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" event={"ID":"87e5ac37-643a-4fb7-8dab-d40645ac9dca","Type":"ContainerStarted","Data":"7f8030457c136781efb6f14d887dbee392b5ab9b84cbdbf12d55314446eff6ae"} Feb 26 14:46:07 crc 
kubenswrapper[4809]: I0226 14:46:07.849557 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79989866bd-79zhg" event={"ID":"078913dd-883e-474c-bf17-8a5b75aaf507","Type":"ContainerStarted","Data":"46d48109f470c9a641837284218ce2269864220f6553c73fd139f56d370548c6"} Feb 26 14:46:07 crc kubenswrapper[4809]: I0226 14:46:07.850249 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:07 crc kubenswrapper[4809]: I0226 14:46:07.853220 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-685c45777-gq64z" event={"ID":"e4f61962-0554-496c-9a5f-da2ed271ddd8","Type":"ContainerStarted","Data":"f5a31ac190c226c2617a7c9343680101e8043751d1aed49cd454f1cc7ce33478"} Feb 26 14:46:07 crc kubenswrapper[4809]: I0226 14:46:07.870560 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-79989866bd-79zhg" podStartSLOduration=2.870529897 podStartE2EDuration="2.870529897s" podCreationTimestamp="2026-02-26 14:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:46:07.868224541 +0000 UTC m=+1946.341545064" watchObservedRunningTime="2026-02-26 14:46:07.870529897 +0000 UTC m=+1946.343850420" Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.233342 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5cf69889d9-nqp5q" podUID="dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.873782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-685c45777-gq64z" event={"ID":"e4f61962-0554-496c-9a5f-da2ed271ddd8","Type":"ContainerStarted","Data":"a5decd525a0d33447710b1b85cd3429503cd18a68bb4456c06398b1fa5ddebf8"} Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.875459 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.876624 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" event={"ID":"87e5ac37-643a-4fb7-8dab-d40645ac9dca","Type":"ContainerStarted","Data":"08b8c6059dfb263121c1e899038e25d2c8d049764ed02bf61fb687131a529648"} Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.876787 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.898642 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-685c45777-gq64z" podStartSLOduration=2.725421919 podStartE2EDuration="4.898625457s" podCreationTimestamp="2026-02-26 14:46:05 +0000 UTC" firstStartedPulling="2026-02-26 14:46:06.897375519 +0000 UTC m=+1945.370696042" lastFinishedPulling="2026-02-26 14:46:09.070579037 +0000 UTC m=+1947.543899580" observedRunningTime="2026-02-26 14:46:09.892225183 +0000 UTC m=+1948.365545706" watchObservedRunningTime="2026-02-26 14:46:09.898625457 +0000 UTC m=+1948.371945980" Feb 26 14:46:09 crc kubenswrapper[4809]: I0226 14:46:09.918782 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" podStartSLOduration=2.746047441 podStartE2EDuration="4.918764474s" podCreationTimestamp="2026-02-26 14:46:05 +0000 
UTC" firstStartedPulling="2026-02-26 14:46:06.897356699 +0000 UTC m=+1945.370677222" lastFinishedPulling="2026-02-26 14:46:09.070073712 +0000 UTC m=+1947.543394255" observedRunningTime="2026-02-26 14:46:09.906638656 +0000 UTC m=+1948.379959179" watchObservedRunningTime="2026-02-26 14:46:09.918764474 +0000 UTC m=+1948.392084997" Feb 26 14:46:15 crc kubenswrapper[4809]: I0226 14:46:15.257926 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:46:15 crc kubenswrapper[4809]: E0226 14:46:15.259078 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.197702 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-79989866bd-79zhg" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.255828 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.256281 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5cdd964fc5-s4bsx" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine" containerID="cri-o://5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" gracePeriod=60 Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.343079 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm"] Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.345644 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.348029 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.348923 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.349107 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.349284 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.359340 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm"] Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.497851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.498054 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.498418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhnb\" (UniqueName: \"kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.498625 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.601816 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jhnb\" (UniqueName: \"kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.601924 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.602060 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.602168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.608841 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.613438 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.613791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.621482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jhnb\" (UniqueName: \"kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:16 crc kubenswrapper[4809]: I0226 14:46:16.667566 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" Feb 26 14:46:17 crc kubenswrapper[4809]: I0226 14:46:17.389095 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm"] Feb 26 14:46:17 crc kubenswrapper[4809]: I0226 14:46:17.907286 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f7b855898-fb2p7" Feb 26 14:46:17 crc kubenswrapper[4809]: I0226 14:46:17.979977 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:46:17 crc kubenswrapper[4809]: I0226 14:46:17.980426 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" podUID="d869783f-f6de-42fc-8e42-a628d4b11262" containerName="heat-cfnapi" containerID="cri-o://4146d83c51e3980648ffe05bce0df5553fe95e0d540c73b68ed51d17402dd07e" gracePeriod=60 Feb 26 14:46:17 crc kubenswrapper[4809]: I0226 14:46:17.984348 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" event={"ID":"b26ec76a-b3e0-4564-a225-0f7fe176f3e4","Type":"ContainerStarted","Data":"25e25b74a42ed3237ebf87ecc06a97e593c128732268877af8b873945369863d"} Feb 26 14:46:18 crc kubenswrapper[4809]: E0226 14:46:18.112725 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:18 crc kubenswrapper[4809]: E0226 14:46:18.115539 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:18 crc kubenswrapper[4809]: E0226 14:46:18.117816 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:18 crc kubenswrapper[4809]: E0226 14:46:18.117892 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5cdd964fc5-s4bsx" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine" Feb 26 14:46:18 crc kubenswrapper[4809]: I0226 14:46:18.210989 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-685c45777-gq64z" Feb 26 14:46:18 crc kubenswrapper[4809]: I0226 14:46:18.295240 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:46:18 crc kubenswrapper[4809]: I0226 14:46:18.295510 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5b8c5684b6-nfc98" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerName="heat-api" containerID="cri-o://c11dd4f1fafd681e716d3e90db1f66e79d9958e4daa1c1d5c257a66f7645782c" 
gracePeriod=60 Feb 26 14:46:18 crc kubenswrapper[4809]: I0226 14:46:18.872001 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-gjsb5"] Feb 26 14:46:18 crc kubenswrapper[4809]: I0226 14:46:18.887858 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-gjsb5"] Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.012488 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-nww5r"] Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.015381 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.020124 4809 generic.go:334] "Generic (PLEG): container finished" podID="7fadb9f7-5f45-40bb-a288-8332be9f3c10" containerID="399ec393cbf606873b6b85523bed66650d7b248c8393100d76d88409686c265f" exitCode=0 Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.022884 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.020171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fadb9f7-5f45-40bb-a288-8332be9f3c10","Type":"ContainerDied","Data":"399ec393cbf606873b6b85523bed66650d7b248c8393100d76d88409686c265f"} Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.057518 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nww5r"] Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.167472 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.167574 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf58c\" (UniqueName: \"kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.168392 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.168546 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.271275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.271359 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.271462 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf58c\" (UniqueName: \"kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.271668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.278092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.278227 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.284053 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.299544 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf58c\" (UniqueName: \"kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c\") pod \"aodh-db-sync-nww5r\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.351351 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:19 crc kubenswrapper[4809]: I0226 14:46:19.879477 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nww5r"] Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.040230 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nww5r" event={"ID":"9b8e6711-9d3f-4961-84c5-defbf691d665","Type":"ContainerStarted","Data":"22b1df558ff9a5df420e694906a8be2f3a02408f0e6c3db971ec34ce2849353b"} Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.041732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-gswpg" event={"ID":"345b13de-06f8-47c7-a9e4-e18fa30835a3","Type":"ContainerStarted","Data":"ef32fd0e816063c79286192f6bf6c6a22a5ac6e0afd1cdac59d27cdd89ea584a"} Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.045545 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fadb9f7-5f45-40bb-a288-8332be9f3c10","Type":"ContainerStarted","Data":"bcd0dadd4311eef7d79eb2b24032fb3942b7f6d83fbb4a4fc5a9f731709bc36c"} Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.046194 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.072748 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535286-gswpg" podStartSLOduration=2.018181944 podStartE2EDuration="20.072725489s" podCreationTimestamp="2026-02-26 14:46:00 +0000 UTC" firstStartedPulling="2026-02-26 14:46:01.027062431 +0000 UTC m=+1939.500382954" lastFinishedPulling="2026-02-26 14:46:19.081605976 +0000 UTC m=+1957.554926499" observedRunningTime="2026-02-26 14:46:20.057458341 +0000 UTC m=+1958.530778884" watchObservedRunningTime="2026-02-26 14:46:20.072725489 +0000 UTC m=+1958.546046022" Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.090031 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.089997034 podStartE2EDuration="37.089997034s" podCreationTimestamp="2026-02-26 14:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:46:20.087509802 +0000 UTC m=+1958.560830335" watchObservedRunningTime="2026-02-26 14:46:20.089997034 +0000 UTC m=+1958.563317557" Feb 26 14:46:20 crc kubenswrapper[4809]: I0226 14:46:20.270431 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e09189b-a91c-4014-b92b-d8f6bdbd7846" path="/var/lib/kubelet/pods/8e09189b-a91c-4014-b92b-d8f6bdbd7846/volumes" Feb 26 14:46:21 crc kubenswrapper[4809]: I0226 14:46:21.058304 4809 generic.go:334] "Generic (PLEG): container finished" podID="345b13de-06f8-47c7-a9e4-e18fa30835a3" containerID="ef32fd0e816063c79286192f6bf6c6a22a5ac6e0afd1cdac59d27cdd89ea584a" exitCode=0 Feb 26 14:46:21 crc kubenswrapper[4809]: I0226 14:46:21.058393 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-gswpg" event={"ID":"345b13de-06f8-47c7-a9e4-e18fa30835a3","Type":"ContainerDied","Data":"ef32fd0e816063c79286192f6bf6c6a22a5ac6e0afd1cdac59d27cdd89ea584a"} Feb 26 14:46:21 crc kubenswrapper[4809]: I0226 14:46:21.148748 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" 
podUID="d869783f-f6de-42fc-8e42-a628d4b11262" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.223:8000/healthcheck\": read tcp 10.217.0.2:53432->10.217.0.223:8000: read: connection reset by peer" Feb 26 14:46:21 crc kubenswrapper[4809]: I0226 14:46:21.469304 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5b8c5684b6-nfc98" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.224:8004/healthcheck\": read tcp 10.217.0.2:58244->10.217.0.224:8004: read: connection reset by peer" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.079086 4809 generic.go:334] "Generic (PLEG): container finished" podID="f1b541d8-7c08-42e8-831b-6e3d7262277a" containerID="1e419141af78b78e9b6ca511111b00a5cd91b030e479f8b05c3d0132af604544" exitCode=0 Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.079142 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f1b541d8-7c08-42e8-831b-6e3d7262277a","Type":"ContainerDied","Data":"1e419141af78b78e9b6ca511111b00a5cd91b030e479f8b05c3d0132af604544"} Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.086836 4809 generic.go:334] "Generic (PLEG): container finished" podID="d869783f-f6de-42fc-8e42-a628d4b11262" containerID="4146d83c51e3980648ffe05bce0df5553fe95e0d540c73b68ed51d17402dd07e" exitCode=0 Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.086890 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" event={"ID":"d869783f-f6de-42fc-8e42-a628d4b11262","Type":"ContainerDied","Data":"4146d83c51e3980648ffe05bce0df5553fe95e0d540c73b68ed51d17402dd07e"} Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.096423 4809 generic.go:334] "Generic (PLEG): container finished" podID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerID="c11dd4f1fafd681e716d3e90db1f66e79d9958e4daa1c1d5c257a66f7645782c" exitCode=0 Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.096573 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b8c5684b6-nfc98" event={"ID":"572ca251-6227-4c68-a2dc-b1a0161eb9d6","Type":"ContainerDied","Data":"c11dd4f1fafd681e716d3e90db1f66e79d9958e4daa1c1d5c257a66f7645782c"} Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.583698 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.703905 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.704081 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.704172 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.704243 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.704307 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.704382 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6469j\" (UniqueName: \"kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j\") pod \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\" (UID: \"572ca251-6227-4c68-a2dc-b1a0161eb9d6\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.712060 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.714668 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j" (OuterVolumeSpecName: "kube-api-access-6469j") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "kube-api-access-6469j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.778581 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.787210 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.808696 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.808862 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.808924 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.808938 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6469j\" (UniqueName: \"kubernetes.io/projected/572ca251-6227-4c68-a2dc-b1a0161eb9d6-kube-api-access-6469j\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.809836 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.825835 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.837905 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data" (OuterVolumeSpecName: "config-data") pod "572ca251-6227-4c68-a2dc-b1a0161eb9d6" (UID: "572ca251-6227-4c68-a2dc-b1a0161eb9d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.909820 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfldc\" (UniqueName: \"kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.909956 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.909984 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.910078 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.910103 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhqg6\" (UniqueName: \"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6\") pod \"345b13de-06f8-47c7-a9e4-e18fa30835a3\" (UID: \"345b13de-06f8-47c7-a9e4-e18fa30835a3\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.910177 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.910291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom\") pod \"d869783f-f6de-42fc-8e42-a628d4b11262\" (UID: \"d869783f-f6de-42fc-8e42-a628d4b11262\") " Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.911508 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.911529 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.911539 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572ca251-6227-4c68-a2dc-b1a0161eb9d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.916133 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6" (OuterVolumeSpecName: "kube-api-access-mhqg6") pod "345b13de-06f8-47c7-a9e4-e18fa30835a3" (UID: "345b13de-06f8-47c7-a9e4-e18fa30835a3"). InnerVolumeSpecName "kube-api-access-mhqg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.916162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.916189 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc" (OuterVolumeSpecName: "kube-api-access-xfldc") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "kube-api-access-xfldc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.954792 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:22 crc kubenswrapper[4809]: I0226 14:46:22.976423 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.006148 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014137 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfldc\" (UniqueName: \"kubernetes.io/projected/d869783f-f6de-42fc-8e42-a628d4b11262-kube-api-access-xfldc\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014188 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014197 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014207 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhqg6\" (UniqueName: \"kubernetes.io/projected/345b13de-06f8-47c7-a9e4-e18fa30835a3-kube-api-access-mhqg6\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014216 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.014224 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.017173 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data" (OuterVolumeSpecName: "config-data") pod "d869783f-f6de-42fc-8e42-a628d4b11262" (UID: "d869783f-f6de-42fc-8e42-a628d4b11262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.120658 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d869783f-f6de-42fc-8e42-a628d4b11262-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.129588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" event={"ID":"d869783f-f6de-42fc-8e42-a628d4b11262","Type":"ContainerDied","Data":"701de7f488cabd1c14b39cef7f344847cabcc67c0a7d3f5ec892d908d7c90644"} Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.129649 4809 scope.go:117] "RemoveContainer" containerID="4146d83c51e3980648ffe05bce0df5553fe95e0d540c73b68ed51d17402dd07e" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.129804 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54ff6f8d67-p4qrr" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.135201 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-ghpjl"] Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.137631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b8c5684b6-nfc98" event={"ID":"572ca251-6227-4c68-a2dc-b1a0161eb9d6","Type":"ContainerDied","Data":"df1efd5e682cd1fa203efae2ed63c683085ee3b835a458ef7eea8189ab842624"} Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.137691 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b8c5684b6-nfc98" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.146114 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f1b541d8-7c08-42e8-831b-6e3d7262277a","Type":"ContainerStarted","Data":"b5255e0aa39ac2bd5cca3ced45c36389926bf82cbcd66cf9dd2c1888a5fb35c6"} Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.146579 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.151116 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535286-gswpg" event={"ID":"345b13de-06f8-47c7-a9e4-e18fa30835a3","Type":"ContainerDied","Data":"7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a"} Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.151153 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c20803ecb7155367eb367e6c5f2619618a6776fdc6ef3fabb8ed4c9ea51647a" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.151205 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535286-gswpg" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.183461 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535280-ghpjl"] Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.208949 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=40.208930004 podStartE2EDuration="40.208930004s" podCreationTimestamp="2026-02-26 14:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:46:23.174461266 +0000 UTC m=+1961.647781789" watchObservedRunningTime="2026-02-26 14:46:23.208930004 +0000 UTC m=+1961.682250527" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.227615 4809 scope.go:117] "RemoveContainer" containerID="c11dd4f1fafd681e716d3e90db1f66e79d9958e4daa1c1d5c257a66f7645782c" Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.233123 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.244840 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54ff6f8d67-p4qrr"] Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.255491 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:46:23 crc kubenswrapper[4809]: I0226 14:46:23.265846 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5b8c5684b6-nfc98"] Feb 26 14:46:24 crc kubenswrapper[4809]: I0226 14:46:24.274156 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afcfb36-d52b-43b1-9abc-59e0242c83f1" path="/var/lib/kubelet/pods/1afcfb36-d52b-43b1-9abc-59e0242c83f1/volumes" Feb 26 14:46:24 crc kubenswrapper[4809]: I0226 14:46:24.279059 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" path="/var/lib/kubelet/pods/572ca251-6227-4c68-a2dc-b1a0161eb9d6/volumes" Feb 26 14:46:24 crc kubenswrapper[4809]: I0226 14:46:24.281094 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d869783f-f6de-42fc-8e42-a628d4b11262" path="/var/lib/kubelet/pods/d869783f-f6de-42fc-8e42-a628d4b11262/volumes" Feb 26 14:46:28 crc kubenswrapper[4809]: E0226 14:46:28.113090 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:28 crc kubenswrapper[4809]: E0226 14:46:28.115228 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:28 crc kubenswrapper[4809]: E0226 14:46:28.116751 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 26 14:46:28 crc kubenswrapper[4809]: E0226 14:46:28.116792 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5cdd964fc5-s4bsx" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine" Feb 26 14:46:30 crc kubenswrapper[4809]: I0226 14:46:30.257180 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:46:30 crc kubenswrapper[4809]: E0226 14:46:30.257688 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:46:31 crc kubenswrapper[4809]: I0226 14:46:31.449253 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 14:46:33 crc kubenswrapper[4809]: I0226 14:46:33.772909 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 14:46:34 crc kubenswrapper[4809]: I0226 14:46:34.094192 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f1b541d8-7c08-42e8-831b-6e3d7262277a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused" Feb 26 14:46:34 crc kubenswrapper[4809]: I0226 14:46:34.351986 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" exitCode=0 Feb 26 14:46:34 crc kubenswrapper[4809]: I0226 14:46:34.352046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5cdd964fc5-s4bsx" event={"ID":"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1","Type":"ContainerDied","Data":"5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723"} Feb 26 14:46:37 crc kubenswrapper[4809]: E0226 14:46:37.093056 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Feb 26 14:46:37 crc kubenswrapper[4809]: E0226 14:46:37.093768 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Feb 26 14:46:37 crc kubenswrapper[4809]: E0226 14:46:37.093923 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hf58c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-nww5r_openstack(9b8e6711-9d3f-4961-84c5-defbf691d665): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 14:46:37 crc kubenswrapper[4809]: E0226 14:46:37.095476 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-nww5r" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" Feb 26 14:46:37 crc kubenswrapper[4809]: E0226 14:46:37.393475 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-nww5r" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.604570 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.770185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle\") pod \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.770475 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data\") pod \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.770743 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom\") pod \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.771050 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcxx9\" (UniqueName: \"kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9\") pod \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\" (UID: \"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1\") " Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.778036 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" (UID: "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.779278 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9" (OuterVolumeSpecName: "kube-api-access-xcxx9") pod "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" (UID: "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1"). InnerVolumeSpecName "kube-api-access-xcxx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.807112 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" (UID: "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.833802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data" (OuterVolumeSpecName: "config-data") pod "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" (UID: "ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.873901 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.873931 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcxx9\" (UniqueName: \"kubernetes.io/projected/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-kube-api-access-xcxx9\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.873941 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:37 crc kubenswrapper[4809]: I0226 14:46:37.873950 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.406878 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" event={"ID":"b26ec76a-b3e0-4564-a225-0f7fe176f3e4","Type":"ContainerStarted","Data":"f42e40d27fcf6cc25e22a605f8d847d5029c046c0977a72cd1dbb25f09d3a2b7"} Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.408891 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5cdd964fc5-s4bsx" event={"ID":"ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1","Type":"ContainerDied","Data":"61f51b68d27ebbd4a425b8a3eb438255b08ea35c21db09db488d47231ce08290"} Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.409060 4809 scope.go:117] "RemoveContainer" containerID="5654acca22783870d3389bd6b95c40229c15e5ec25714e4ea53effc7cdf4d723" Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.409283 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5cdd964fc5-s4bsx" Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.438585 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" podStartSLOduration=2.717762609 podStartE2EDuration="22.438559324s" podCreationTimestamp="2026-02-26 14:46:16 +0000 UTC" firstStartedPulling="2026-02-26 14:46:17.390947256 +0000 UTC m=+1955.864267779" lastFinishedPulling="2026-02-26 14:46:37.111743951 +0000 UTC m=+1975.585064494" observedRunningTime="2026-02-26 14:46:38.433280663 +0000 UTC m=+1976.906601236" watchObservedRunningTime="2026-02-26 14:46:38.438559324 +0000 UTC m=+1976.911879877" Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.467181 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:46:38 crc kubenswrapper[4809]: I0226 14:46:38.477045 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5cdd964fc5-s4bsx"] Feb 26 14:46:40 crc kubenswrapper[4809]: I0226 14:46:40.278397 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" path="/var/lib/kubelet/pods/ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1/volumes" Feb 26 14:46:41 crc kubenswrapper[4809]: I0226 14:46:41.257964 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:46:41 crc kubenswrapper[4809]: E0226 14:46:41.258982 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:46:44 crc kubenswrapper[4809]: I0226 14:46:44.094196 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 26 14:46:44 crc kubenswrapper[4809]: I0226 14:46:44.162229 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:48 crc kubenswrapper[4809]: I0226 14:46:48.545998 4809 generic.go:334] "Generic (PLEG): container finished" podID="b26ec76a-b3e0-4564-a225-0f7fe176f3e4" containerID="f42e40d27fcf6cc25e22a605f8d847d5029c046c0977a72cd1dbb25f09d3a2b7" exitCode=0 Feb 26 14:46:48 crc kubenswrapper[4809]: I0226 14:46:48.546356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" event={"ID":"b26ec76a-b3e0-4564-a225-0f7fe176f3e4","Type":"ContainerDied","Data":"f42e40d27fcf6cc25e22a605f8d847d5029c046c0977a72cd1dbb25f09d3a2b7"} Feb 26 14:46:49 crc kubenswrapper[4809]: I0226 14:46:49.208452 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="rabbitmq" containerID="cri-o://6f833ff09ea76db7e9202047d2b9b7ee2a7139bd4a486382e571e914dc3b411d" gracePeriod=604795 Feb 26 14:46:49 crc kubenswrapper[4809]: I0226 14:46:49.235359 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Feb 26 
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.349497 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.507241 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jhnb\" (UniqueName: \"kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb\") pod \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") "
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.507680 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory\") pod \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") "
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.507757 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle\") pod \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") "
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.507874 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") pod \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") "
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.515446 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb" (OuterVolumeSpecName: "kube-api-access-2jhnb") pod "b26ec76a-b3e0-4564-a225-0f7fe176f3e4" (UID: "b26ec76a-b3e0-4564-a225-0f7fe176f3e4"). InnerVolumeSpecName "kube-api-access-2jhnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.523203 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b26ec76a-b3e0-4564-a225-0f7fe176f3e4" (UID: "b26ec76a-b3e0-4564-a225-0f7fe176f3e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.544393 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam podName:b26ec76a-b3e0-4564-a225-0f7fe176f3e4 nodeName:}" failed. No retries permitted until 2026-02-26 14:46:51.044366873 +0000 UTC m=+1989.517687396 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam") pod "b26ec76a-b3e0-4564-a225-0f7fe176f3e4" (UID: "b26ec76a-b3e0-4564-a225-0f7fe176f3e4") : error deleting /var/lib/kubelet/pods/b26ec76a-b3e0-4564-a225-0f7fe176f3e4/volume-subpaths: remove /var/lib/kubelet/pods/b26ec76a-b3e0-4564-a225-0f7fe176f3e4/volume-subpaths: no such file or directory
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.547785 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory" (OuterVolumeSpecName: "inventory") pod "b26ec76a-b3e0-4564-a225-0f7fe176f3e4" (UID: "b26ec76a-b3e0-4564-a225-0f7fe176f3e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.572747 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm" event={"ID":"b26ec76a-b3e0-4564-a225-0f7fe176f3e4","Type":"ContainerDied","Data":"25e25b74a42ed3237ebf87ecc06a97e593c128732268877af8b873945369863d"}
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.572786 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e25b74a42ed3237ebf87ecc06a97e593c128732268877af8b873945369863d"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.572802 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.611064 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.611106 4809 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.611123 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jhnb\" (UniqueName: \"kubernetes.io/projected/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-kube-api-access-2jhnb\") on node \"crc\" DevicePath \"\""
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.665466 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz"]
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.665964 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26ec76a-b3e0-4564-a225-0f7fe176f3e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.665981 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26ec76a-b3e0-4564-a225-0f7fe176f3e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.666022 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666028 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine"
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.666036 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerName="heat-api"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666042 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerName="heat-api"
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.666070 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" containerName="oc"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666076 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" containerName="oc"
Feb 26 14:46:50 crc kubenswrapper[4809]: E0226 14:46:50.666088 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869783f-f6de-42fc-8e42-a628d4b11262" containerName="heat-cfnapi"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666095 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869783f-f6de-42fc-8e42-a628d4b11262" containerName="heat-cfnapi"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666360 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d869783f-f6de-42fc-8e42-a628d4b11262" containerName="heat-cfnapi"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666388 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" containerName="oc"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666397 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="572ca251-6227-4c68-a2dc-b1a0161eb9d6" containerName="heat-api"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666406 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26ec76a-b3e0-4564-a225-0f7fe176f3e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.666423 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8e2cdb-7c92-4bbd-b966-1471f44bc5c1" containerName="heat-engine"
Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.667216 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz"
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.685503 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz"] Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.817065 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.817157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.817367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x577p\" (UniqueName: \"kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.919810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.919893 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.920040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x577p\" (UniqueName: \"kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.923588 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.925611 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.938865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x577p\" (UniqueName: \"kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-n6wjz\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:50 crc kubenswrapper[4809]: I0226 14:46:50.995858 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:51 crc kubenswrapper[4809]: I0226 14:46:51.123740 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") pod \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\" (UID: \"b26ec76a-b3e0-4564-a225-0f7fe176f3e4\") " Feb 26 14:46:51 crc kubenswrapper[4809]: I0226 14:46:51.129659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b26ec76a-b3e0-4564-a225-0f7fe176f3e4" (UID: "b26ec76a-b3e0-4564-a225-0f7fe176f3e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:51 crc kubenswrapper[4809]: I0226 14:46:51.227348 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b26ec76a-b3e0-4564-a225-0f7fe176f3e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:51 crc kubenswrapper[4809]: I0226 14:46:51.602276 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz"] Feb 26 14:46:51 crc kubenswrapper[4809]: W0226 14:46:51.611175 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83288dad_14b0_4e58_b07f_4006eddbbfe6.slice/crio-dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6 WatchSource:0}: Error finding container dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6: Status 404 returned error can't find the container with id dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6 Feb 26 14:46:52 crc kubenswrapper[4809]: I0226 14:46:52.265813 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:46:52 crc kubenswrapper[4809]: E0226 14:46:52.266400 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:46:52 crc kubenswrapper[4809]: I0226 14:46:52.596211 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" event={"ID":"83288dad-14b0-4e58-b07f-4006eddbbfe6","Type":"ContainerStarted","Data":"ba3841f793847ffb61d3312cca63e3120072cd1313b6ac66e5237873a403e295"} Feb 26 14:46:52 crc kubenswrapper[4809]: I0226 14:46:52.596559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" event={"ID":"83288dad-14b0-4e58-b07f-4006eddbbfe6","Type":"ContainerStarted","Data":"dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6"} Feb 26 14:46:52 crc kubenswrapper[4809]: I0226 14:46:52.619448 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" podStartSLOduration=2.148774822 podStartE2EDuration="2.61942737s" podCreationTimestamp="2026-02-26 14:46:50 +0000 UTC" firstStartedPulling="2026-02-26 14:46:51.614579843 +0000 UTC m=+1990.087900366" lastFinishedPulling="2026-02-26 14:46:52.085232391 +0000 UTC m=+1990.558552914" observedRunningTime="2026-02-26 14:46:52.618375649 +0000 UTC m=+1991.091696212" watchObservedRunningTime="2026-02-26 14:46:52.61942737 +0000 UTC m=+1991.092747893" Feb 26 14:46:53 crc kubenswrapper[4809]: I0226 14:46:53.463804 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 14:46:54 crc kubenswrapper[4809]: I0226 14:46:54.626441 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nww5r" event={"ID":"9b8e6711-9d3f-4961-84c5-defbf691d665","Type":"ContainerStarted","Data":"201ba42ac0c0cc0d72c2668a279ca6cc31c2e002c071969cdd700216b3313e2f"} Feb 26 14:46:54 crc kubenswrapper[4809]: I0226 14:46:54.662692 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-nww5r" podStartSLOduration=3.093916803 podStartE2EDuration="36.662674683s" podCreationTimestamp="2026-02-26 14:46:18 +0000 UTC" firstStartedPulling="2026-02-26 14:46:19.892145544 +0000 UTC m=+1958.365466067" lastFinishedPulling="2026-02-26 14:46:53.460903424 +0000 UTC m=+1991.934223947" observedRunningTime="2026-02-26 14:46:54.65278902 +0000 UTC m=+1993.126109543" watchObservedRunningTime="2026-02-26 14:46:54.662674683 +0000 UTC m=+1993.135995206" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.643757 4809 generic.go:334] "Generic (PLEG): container finished" podID="32357a81-452d-4c32-8ac2-129d23b8c843" containerID="6f833ff09ea76db7e9202047d2b9b7ee2a7139bd4a486382e571e914dc3b411d" exitCode=0 Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.644183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerDied","Data":"6f833ff09ea76db7e9202047d2b9b7ee2a7139bd4a486382e571e914dc3b411d"} Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.645557 4809 generic.go:334] "Generic (PLEG): container finished" podID="83288dad-14b0-4e58-b07f-4006eddbbfe6" containerID="ba3841f793847ffb61d3312cca63e3120072cd1313b6ac66e5237873a403e295" exitCode=0 Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.645592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" event={"ID":"83288dad-14b0-4e58-b07f-4006eddbbfe6","Type":"ContainerDied","Data":"ba3841f793847ffb61d3312cca63e3120072cd1313b6ac66e5237873a403e295"} Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.859633 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958815 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958857 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz8l9\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958899 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958932 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.958993 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.959128 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.959194 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.959231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.959301 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins\") pod 
\"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.959436 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info\") pod \"32357a81-452d-4c32-8ac2-129d23b8c843\" (UID: \"32357a81-452d-4c32-8ac2-129d23b8c843\") " Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.960515 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.972602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.976457 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info" (OuterVolumeSpecName: "pod-info") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.977447 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.984581 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9" (OuterVolumeSpecName: "kube-api-access-kz8l9") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "kube-api-access-kz8l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.984753 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:55 crc kubenswrapper[4809]: I0226 14:46:55.987803 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.027670 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data" (OuterVolumeSpecName: "config-data") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.030589 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a" (OuterVolumeSpecName: "persistence") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "pvc-0e081130-279c-4e5f-a140-0b6ccf29201a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070602 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") on node \"crc\" " Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070645 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070659 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz8l9\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-kube-api-access-kz8l9\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070856 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070870 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070882 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070897 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32357a81-452d-4c32-8ac2-129d23b8c843-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070929 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.070942 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32357a81-452d-4c32-8ac2-129d23b8c843-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.102688 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf" (OuterVolumeSpecName: "server-conf") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.117606 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.117772 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e081130-279c-4e5f-a140-0b6ccf29201a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a") on node "crc" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.173483 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.173519 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32357a81-452d-4c32-8ac2-129d23b8c843-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.188258 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "32357a81-452d-4c32-8ac2-129d23b8c843" (UID: "32357a81-452d-4c32-8ac2-129d23b8c843"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.275290 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32357a81-452d-4c32-8ac2-129d23b8c843-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.660248 4809 generic.go:334] "Generic (PLEG): container finished" podID="9b8e6711-9d3f-4961-84c5-defbf691d665" containerID="201ba42ac0c0cc0d72c2668a279ca6cc31c2e002c071969cdd700216b3313e2f" exitCode=0 Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.660345 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nww5r" event={"ID":"9b8e6711-9d3f-4961-84c5-defbf691d665","Type":"ContainerDied","Data":"201ba42ac0c0cc0d72c2668a279ca6cc31c2e002c071969cdd700216b3313e2f"} Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.664117 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"32357a81-452d-4c32-8ac2-129d23b8c843","Type":"ContainerDied","Data":"7dc581f248432881e590539ff2e3e243aec323dd954bc34de064ad69cb4016b1"} Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.664172 4809 scope.go:117] "RemoveContainer" containerID="6f833ff09ea76db7e9202047d2b9b7ee2a7139bd4a486382e571e914dc3b411d" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.664210 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.712571 4809 scope.go:117] "RemoveContainer" containerID="9d4e9f94eba27283b34ce01c7f379079b4b8e5018367754a17160346d861d189" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.715339 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.731333 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.786393 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:56 crc kubenswrapper[4809]: E0226 14:46:56.795215 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="setup-container" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.795259 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="setup-container" Feb 26 14:46:56 crc kubenswrapper[4809]: E0226 14:46:56.795319 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="rabbitmq" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.795328 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="rabbitmq" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.796865 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" containerName="rabbitmq" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.843569 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:46:56 crc kubenswrapper[4809]: I0226 14:46:56.891596 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010519 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010580 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010640 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4hq\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-kube-api-access-dq4hq\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010820 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.010857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.011144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-config-data\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.011229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.011263 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/44f15062-69d5-4f5c-a51c-3c0f75700b52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.011326 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.011367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44f15062-69d5-4f5c-a51c-3c0f75700b52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113155 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113239 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113333 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-config-data\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113383 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113411 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44f15062-69d5-4f5c-a51c-3c0f75700b52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc 
kubenswrapper[4809]: I0226 14:46:57.113519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44f15062-69d5-4f5c-a51c-3c0f75700b52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113553 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113576 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.113602 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4hq\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-kube-api-access-dq4hq\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.114435 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.114904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.115057 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-config-data\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.115555 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.116111 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44f15062-69d5-4f5c-a51c-3c0f75700b52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.118489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44f15062-69d5-4f5c-a51c-3c0f75700b52-pod-info\") pod 
\"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.119556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44f15062-69d5-4f5c-a51c-3c0f75700b52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.120387 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.120425 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1be914a20758f707d6b14a059f5596264bd58434ad39af7f125013f388c0c9c1/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.123469 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.124475 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.140733 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4hq\" (UniqueName: \"kubernetes.io/projected/44f15062-69d5-4f5c-a51c-3c0f75700b52-kube-api-access-dq4hq\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.207445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e081130-279c-4e5f-a140-0b6ccf29201a\") pod \"rabbitmq-server-1\" (UID: \"44f15062-69d5-4f5c-a51c-3c0f75700b52\") " pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.366414 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.487773 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.521353 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory\") pod \"83288dad-14b0-4e58-b07f-4006eddbbfe6\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.521672 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam\") pod \"83288dad-14b0-4e58-b07f-4006eddbbfe6\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.521768 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x577p\" (UniqueName: \"kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p\") pod \"83288dad-14b0-4e58-b07f-4006eddbbfe6\" (UID: \"83288dad-14b0-4e58-b07f-4006eddbbfe6\") " Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.526256 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p" (OuterVolumeSpecName: "kube-api-access-x577p") pod "83288dad-14b0-4e58-b07f-4006eddbbfe6" (UID: "83288dad-14b0-4e58-b07f-4006eddbbfe6"). InnerVolumeSpecName "kube-api-access-x577p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.560689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory" (OuterVolumeSpecName: "inventory") pod "83288dad-14b0-4e58-b07f-4006eddbbfe6" (UID: "83288dad-14b0-4e58-b07f-4006eddbbfe6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.567281 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83288dad-14b0-4e58-b07f-4006eddbbfe6" (UID: "83288dad-14b0-4e58-b07f-4006eddbbfe6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.624481 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x577p\" (UniqueName: \"kubernetes.io/projected/83288dad-14b0-4e58-b07f-4006eddbbfe6-kube-api-access-x577p\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.624517 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.624530 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83288dad-14b0-4e58-b07f-4006eddbbfe6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.701200 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.702253 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-n6wjz" event={"ID":"83288dad-14b0-4e58-b07f-4006eddbbfe6","Type":"ContainerDied","Data":"dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6"} Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.702312 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc3dcf8f787c9e5a52d4367e12299e4c5aa10b90a035916a597ae52e2411d1d6" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.785159 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj"] Feb 26 14:46:57 crc kubenswrapper[4809]: E0226 14:46:57.786448 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83288dad-14b0-4e58-b07f-4006eddbbfe6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.786467 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="83288dad-14b0-4e58-b07f-4006eddbbfe6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.786822 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="83288dad-14b0-4e58-b07f-4006eddbbfe6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.788681 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.798615 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.798673 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.798878 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.799044 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.822322 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj"] Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.935559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.935834 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:57 
crc kubenswrapper[4809]: I0226 14:46:57.936074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgt9\" (UniqueName: \"kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:57 crc kubenswrapper[4809]: I0226 14:46:57.936094 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.051818 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwgt9\" (UniqueName: \"kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.051872 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.051957 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.051996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.060439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.061701 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.062209 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.081668 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwgt9\" (UniqueName: \"kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.111632 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.148428 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.215846 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.257657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts\") pod \"9b8e6711-9d3f-4961-84c5-defbf691d665\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.257979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data\") pod \"9b8e6711-9d3f-4961-84c5-defbf691d665\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.258102 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle\") pod \"9b8e6711-9d3f-4961-84c5-defbf691d665\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.258173 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf58c\" (UniqueName: \"kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c\") pod \"9b8e6711-9d3f-4961-84c5-defbf691d665\" (UID: \"9b8e6711-9d3f-4961-84c5-defbf691d665\") " Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.261541 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c" (OuterVolumeSpecName: "kube-api-access-hf58c") pod "9b8e6711-9d3f-4961-84c5-defbf691d665" (UID: "9b8e6711-9d3f-4961-84c5-defbf691d665"). InnerVolumeSpecName "kube-api-access-hf58c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.262456 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts" (OuterVolumeSpecName: "scripts") pod "9b8e6711-9d3f-4961-84c5-defbf691d665" (UID: "9b8e6711-9d3f-4961-84c5-defbf691d665"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.274503 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32357a81-452d-4c32-8ac2-129d23b8c843" path="/var/lib/kubelet/pods/32357a81-452d-4c32-8ac2-129d23b8c843/volumes" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.303756 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data" (OuterVolumeSpecName: "config-data") pod "9b8e6711-9d3f-4961-84c5-defbf691d665" (UID: "9b8e6711-9d3f-4961-84c5-defbf691d665"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.306376 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8e6711-9d3f-4961-84c5-defbf691d665" (UID: "9b8e6711-9d3f-4961-84c5-defbf691d665"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.361427 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.361473 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.361485 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf58c\" (UniqueName: \"kubernetes.io/projected/9b8e6711-9d3f-4961-84c5-defbf691d665-kube-api-access-hf58c\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.361495 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e6711-9d3f-4961-84c5-defbf691d665-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:46:58 crc kubenswrapper[4809]: W0226 14:46:58.709273 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2d7dc7_59ac_4b40_9a53_6f1a26eceb47.slice/crio-c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23 WatchSource:0}: Error finding container c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23: Status 404 returned error can't find the container with id c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23 Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.721423 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj"] Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.723445 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nww5r" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.724461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nww5r" event={"ID":"9b8e6711-9d3f-4961-84c5-defbf691d665","Type":"ContainerDied","Data":"22b1df558ff9a5df420e694906a8be2f3a02408f0e6c3db971ec34ce2849353b"} Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.724499 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b1df558ff9a5df420e694906a8be2f3a02408f0e6c3db971ec34ce2849353b" Feb 26 14:46:58 crc kubenswrapper[4809]: I0226 14:46:58.726335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"44f15062-69d5-4f5c-a51c-3c0f75700b52","Type":"ContainerStarted","Data":"f702d2352a3a87cb5229fcca2800d85182c9e3d2871591f529c9d7d36d9ed9cb"} Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.100584 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.100860 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-api" containerID="cri-o://eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c" gracePeriod=30 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.100918 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-listener" containerID="cri-o://04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc" gracePeriod=30 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.101029 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-evaluator" containerID="cri-o://dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059" gracePeriod=30 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.100918 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-notifier" containerID="cri-o://780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9" gracePeriod=30 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.751613 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" event={"ID":"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47","Type":"ContainerStarted","Data":"c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23"} Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.759200 4809 generic.go:334] "Generic (PLEG): container finished" podID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerID="dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059" exitCode=0 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.759246 4809 generic.go:334] "Generic (PLEG): container finished" podID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerID="eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c" exitCode=0 Feb 26 14:46:59 crc kubenswrapper[4809]: I0226 14:46:59.759271 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerDied","Data":"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059"} Feb 26 14:46:59 crc 
kubenswrapper[4809]: I0226 14:46:59.759301 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerDied","Data":"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c"} Feb 26 14:47:00 crc kubenswrapper[4809]: I0226 14:47:00.771943 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" event={"ID":"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47","Type":"ContainerStarted","Data":"7c53292a42b710e6609f30b613b537b6acaa73b3dc8b0d12c774e79618ea8f23"} Feb 26 14:47:00 crc kubenswrapper[4809]: I0226 14:47:00.775556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"44f15062-69d5-4f5c-a51c-3c0f75700b52","Type":"ContainerStarted","Data":"339fcfce2250c2cbdd3beadddf2edab39c1441ab951b20a75db8b76c27c593ef"} Feb 26 14:47:00 crc kubenswrapper[4809]: I0226 14:47:00.802661 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" podStartSLOduration=3.371637497 podStartE2EDuration="3.802645368s" podCreationTimestamp="2026-02-26 14:46:57 +0000 UTC" firstStartedPulling="2026-02-26 14:46:58.715091455 +0000 UTC m=+1997.188411978" lastFinishedPulling="2026-02-26 14:46:59.146099326 +0000 UTC m=+1997.619419849" observedRunningTime="2026-02-26 14:47:00.791797867 +0000 UTC m=+1999.265118390" watchObservedRunningTime="2026-02-26 14:47:00.802645368 +0000 UTC m=+1999.275965891" Feb 26 14:47:03 crc kubenswrapper[4809]: I0226 14:47:03.855880 4809 generic.go:334] "Generic (PLEG): container finished" podID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerID="780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9" exitCode=0 Feb 26 14:47:03 crc kubenswrapper[4809]: I0226 14:47:03.855958 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerDied","Data":"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9"} Feb 26 14:47:04 crc kubenswrapper[4809]: I0226 14:47:04.256912 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:47:04 crc kubenswrapper[4809]: E0226 14:47:04.257307 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.612622 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.777644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.778026 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.778156 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.778188 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.778242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc9bn\" (UniqueName: \"kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.778283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data\") pod \"706edc08-ac4a-45bc-9fbc-78c486ecd636\" (UID: \"706edc08-ac4a-45bc-9fbc-78c486ecd636\") " Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.785592 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts" (OuterVolumeSpecName: "scripts") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.788511 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn" (OuterVolumeSpecName: "kube-api-access-qc9bn") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "kube-api-access-qc9bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.866316 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.883009 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc9bn\" (UniqueName: \"kubernetes.io/projected/706edc08-ac4a-45bc-9fbc-78c486ecd636-kube-api-access-qc9bn\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.883078 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.883091 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.891774 4809 generic.go:334] "Generic (PLEG): container finished" podID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerID="04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc" exitCode=0 Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.891828 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.891836 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerDied","Data":"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc"} Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.892080 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"706edc08-ac4a-45bc-9fbc-78c486ecd636","Type":"ContainerDied","Data":"d330a5f592452598d558076feca30b6f048029e5a318c8b6324e947abab54810"} Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.892124 4809 scope.go:117] "RemoveContainer" containerID="04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.912220 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.937484 4809 scope.go:117] "RemoveContainer" containerID="780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.964832 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.977004 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data" (OuterVolumeSpecName: "config-data") pod "706edc08-ac4a-45bc-9fbc-78c486ecd636" (UID: "706edc08-ac4a-45bc-9fbc-78c486ecd636"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.985903 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.985938 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.985946 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/706edc08-ac4a-45bc-9fbc-78c486ecd636-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:05 crc kubenswrapper[4809]: I0226 14:47:05.989982 4809 scope.go:117] "RemoveContainer" containerID="dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.046146 4809 scope.go:117] "RemoveContainer" containerID="eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.080888 4809 scope.go:117] "RemoveContainer" containerID="04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.081356 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc\": container with ID starting with 04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc not found: ID does not exist" containerID="04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.081388 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc"} err="failed to get container status \"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc\": rpc error: code = NotFound desc = could not find container \"04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc\": container with ID starting with 04d947e13d3563334148aa95d86047afec19af8f02bf970d6c392c7d36d26acc not found: ID does not exist" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.081413 4809 scope.go:117] "RemoveContainer" containerID="780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.081759 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9\": container with ID starting with 780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9 not found: ID does not exist" containerID="780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.081786 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9"} err="failed to get container status \"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9\": rpc error: code = NotFound desc = could not find container 
\"780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9\": container with ID starting with 780069ee97440046e93faaae21f1f76adc5af7ea67ce8d7a0c0449cd5360b0f9 not found: ID does not exist" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.081800 4809 scope.go:117] "RemoveContainer" containerID="dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.082004 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059\": container with ID starting with dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059 not found: ID does not exist" containerID="dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.082036 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059"} err="failed to get container status \"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059\": rpc error: code = NotFound desc = could not find container \"dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059\": container with ID starting with dad5de0e61a8ea416c01c97553a3fb1966ade7f6de3c37783998d9e56ab20059 not found: ID does not exist" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.082053 4809 scope.go:117] "RemoveContainer" containerID="eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.082432 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c\": container with ID starting with eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c not found: ID does not exist" containerID="eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.082453 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c"} err="failed to get container status \"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c\": rpc error: code = NotFound desc = could not find container \"eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c\": container with ID starting with eb7592fcb173ffb5b4d3acec54116e5e9472c926357eec5a0535c619a0ce1e9c not found: ID does not exist" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.280962 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.298927 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.317578 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.318199 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-notifier" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318224 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-notifier" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.318267 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-api" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318278 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-api" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.318301 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-listener" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318310 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-listener" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.318342 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-evaluator" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318351 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-evaluator" Feb 26 14:47:06 crc kubenswrapper[4809]: E0226 14:47:06.318370 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" containerName="aodh-db-sync" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318378 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" containerName="aodh-db-sync" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318731 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-api" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318768 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" containerName="aodh-db-sync" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318791 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-listener" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318805 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-notifier" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.318826 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" containerName="aodh-evaluator" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.321429 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.325101 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.325124 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.325385 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.327952 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.328085 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-6p9fd" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.336826 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.503388 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-scripts\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.503767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.503857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.504320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-public-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.504392 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-config-data\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.504444 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5dx\" (UniqueName: \"kubernetes.io/projected/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-kube-api-access-zw5dx\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" 
Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609353 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-public-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609397 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-config-data\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5dx\" (UniqueName: \"kubernetes.io/projected/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-kube-api-access-zw5dx\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609518 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-scripts\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.609652 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.616096 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-scripts\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.618497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-config-data\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.620532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.620859 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.621559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-public-tls-certs\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.653597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5dx\" 
(UniqueName: \"kubernetes.io/projected/b1c32e51-c938-4ba2-937a-b57e26cfd0a1-kube-api-access-zw5dx\") pod \"aodh-0\" (UID: \"b1c32e51-c938-4ba2-937a-b57e26cfd0a1\") " pod="openstack/aodh-0" Feb 26 14:47:06 crc kubenswrapper[4809]: I0226 14:47:06.944753 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 26 14:47:07 crc kubenswrapper[4809]: I0226 14:47:07.189415 4809 scope.go:117] "RemoveContainer" containerID="bc8342a3e828a5c979876227e1d13c982413de9d05a3c986824bf6ab5db373e3" Feb 26 14:47:07 crc kubenswrapper[4809]: I0226 14:47:07.221411 4809 scope.go:117] "RemoveContainer" containerID="a0b1a78fa0070b44aaf2fc3035ad2404387a97825086f49ce02092bb1ccb9262" Feb 26 14:47:07 crc kubenswrapper[4809]: I0226 14:47:07.287199 4809 scope.go:117] "RemoveContainer" containerID="d38c36ea01dbc600d606b20240fc7ee0ccf280a2cbde879bc96d854b156ff1d2" Feb 26 14:47:07 crc kubenswrapper[4809]: I0226 14:47:07.314856 4809 scope.go:117] "RemoveContainer" containerID="87a4ae544ed7684b8eedfe09829a2cd5d032fecb13092c2fe6c0ac13ffbc1176" Feb 26 14:47:07 crc kubenswrapper[4809]: I0226 14:47:07.381992 4809 scope.go:117] "RemoveContainer" containerID="09f6260937bad6e5eef607f4499c70afded060ec8faf07e3ddeafd4f431a2ac6" Feb 26 14:47:08 crc kubenswrapper[4809]: I0226 14:47:07.428172 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 26 14:47:08 crc kubenswrapper[4809]: I0226 14:47:07.445066 4809 scope.go:117] "RemoveContainer" containerID="f81bcff20e2b62765648f64ff5526a26729017c403c307eb7b2eef5f41d360d4" Feb 26 14:47:08 crc kubenswrapper[4809]: I0226 14:47:07.919776 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b1c32e51-c938-4ba2-937a-b57e26cfd0a1","Type":"ContainerStarted","Data":"296cc771404dcc40f363f77ecb6b9404972314b3e1ed457b78aa49b7030c572f"} Feb 26 14:47:08 crc kubenswrapper[4809]: I0226 14:47:08.275366 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706edc08-ac4a-45bc-9fbc-78c486ecd636" path="/var/lib/kubelet/pods/706edc08-ac4a-45bc-9fbc-78c486ecd636/volumes" Feb 26 14:47:08 crc kubenswrapper[4809]: I0226 14:47:08.932957 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b1c32e51-c938-4ba2-937a-b57e26cfd0a1","Type":"ContainerStarted","Data":"0dd166e38571216206ab00e7334c05eb36a438500d86a40c2fac5286e7dd305f"} Feb 26 14:47:09 crc kubenswrapper[4809]: I0226 14:47:09.944760 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b1c32e51-c938-4ba2-937a-b57e26cfd0a1","Type":"ContainerStarted","Data":"3ff226031e04155cb9cba4dfc4d96cb3af7524410f920c7f09c7fe45aacc4c4f"} Feb 26 14:47:10 crc kubenswrapper[4809]: I0226 14:47:10.982758 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b1c32e51-c938-4ba2-937a-b57e26cfd0a1","Type":"ContainerStarted","Data":"c7efdfc2866b23359f1869202ab6e9ecd22684d01a0d56e9dd9d1719f7111195"} Feb 26 14:47:12 crc kubenswrapper[4809]: I0226 14:47:12.004133 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b1c32e51-c938-4ba2-937a-b57e26cfd0a1","Type":"ContainerStarted","Data":"a499bdc843799b2ebeff61f39dc48998ab437d8aeab43ac9637bb05f71a44aac"} Feb 26 14:47:12 crc kubenswrapper[4809]: I0226 14:47:12.038570 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.151442067 podStartE2EDuration="6.038549621s" podCreationTimestamp="2026-02-26 14:47:06 
+0000 UTC" firstStartedPulling="2026-02-26 14:47:07.456303237 +0000 UTC m=+2005.929623760" lastFinishedPulling="2026-02-26 14:47:11.343410791 +0000 UTC m=+2009.816731314" observedRunningTime="2026-02-26 14:47:12.038125939 +0000 UTC m=+2010.511446482" watchObservedRunningTime="2026-02-26 14:47:12.038549621 +0000 UTC m=+2010.511870144" Feb 26 14:47:18 crc kubenswrapper[4809]: I0226 14:47:18.257683 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:47:18 crc kubenswrapper[4809]: E0226 14:47:18.258885 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:47:32 crc kubenswrapper[4809]: I0226 14:47:32.241305 4809 generic.go:334] "Generic (PLEG): container finished" podID="44f15062-69d5-4f5c-a51c-3c0f75700b52" containerID="339fcfce2250c2cbdd3beadddf2edab39c1441ab951b20a75db8b76c27c593ef" exitCode=0 Feb 26 14:47:32 crc kubenswrapper[4809]: I0226 14:47:32.241393 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"44f15062-69d5-4f5c-a51c-3c0f75700b52","Type":"ContainerDied","Data":"339fcfce2250c2cbdd3beadddf2edab39c1441ab951b20a75db8b76c27c593ef"} Feb 26 14:47:33 crc kubenswrapper[4809]: I0226 14:47:33.254774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"44f15062-69d5-4f5c-a51c-3c0f75700b52","Type":"ContainerStarted","Data":"3c4edf2f7c263459d24a4871dfc84e5825563e5a108d625be33ef30f948c02de"} Feb 26 14:47:33 crc kubenswrapper[4809]: I0226 14:47:33.255705 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 26 14:47:33 crc kubenswrapper[4809]: I0226 14:47:33.257911 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:47:33 crc kubenswrapper[4809]: E0226 14:47:33.258210 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:47:33 crc kubenswrapper[4809]: I0226 14:47:33.299801 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.29977745 podStartE2EDuration="37.29977745s" podCreationTimestamp="2026-02-26 14:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:47:33.2889574 +0000 UTC m=+2031.762277923" watchObservedRunningTime="2026-02-26 14:47:33.29977745 +0000 UTC m=+2031.773097973" Feb 26 14:47:47 crc kubenswrapper[4809]: I0226 14:47:47.492308 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 26 14:47:47 crc kubenswrapper[4809]: I0226 14:47:47.640805 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Feb 26 14:47:48 crc kubenswrapper[4809]: I0226 14:47:48.257152 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:47:48 crc kubenswrapper[4809]: E0226 14:47:48.258070 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:47:52 crc kubenswrapper[4809]: I0226 14:47:52.080684 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="rabbitmq" containerID="cri-o://51f722f1046e756005d1581f2bc10ba9953b9f5810eb226613552aaf6604b683" gracePeriod=604796 Feb 26 14:47:58 crc kubenswrapper[4809]: I0226 14:47:58.576867 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerID="51f722f1046e756005d1581f2bc10ba9953b9f5810eb226613552aaf6604b683" exitCode=0 Feb 26 14:47:58 crc kubenswrapper[4809]: I0226 14:47:58.576949 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerDied","Data":"51f722f1046e756005d1581f2bc10ba9953b9f5810eb226613552aaf6604b683"} Feb 26 14:47:58 crc kubenswrapper[4809]: I0226 14:47:58.969889 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089424 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089589 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089677 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089827 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089874 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvdmx\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: 
\"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.089936 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.090152 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.090229 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.090252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.090595 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.090932 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.092538 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.092594 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.092629 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd\") pod \"a9e03baa-a568-46f3-90dc-ad3ad328567c\" (UID: \"a9e03baa-a568-46f3-90dc-ad3ad328567c\") " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.093785 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.093813 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.093828 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.097523 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info" (OuterVolumeSpecName: "pod-info") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.097614 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx" (OuterVolumeSpecName: "kube-api-access-pvdmx") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "kube-api-access-pvdmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.098420 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.102167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.116599 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f" (OuterVolumeSpecName: "persistence") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.153276 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data" (OuterVolumeSpecName: "config-data") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.196592 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e03baa-a568-46f3-90dc-ad3ad328567c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.197514 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvdmx\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-kube-api-access-pvdmx\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.197629 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") on node \"crc\" " Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.197704 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.197796 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.197863 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e03baa-a568-46f3-90dc-ad3ad328567c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.199553 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf" (OuterVolumeSpecName: "server-conf") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.304440 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e03baa-a568-46f3-90dc-ad3ad328567c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.374264 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a9e03baa-a568-46f3-90dc-ad3ad328567c" (UID: "a9e03baa-a568-46f3-90dc-ad3ad328567c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.410718 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e03baa-a568-46f3-90dc-ad3ad328567c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.459929 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.468605 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f") on node "crc" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.512728 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") on node \"crc\" DevicePath \"\"" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.596358 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e03baa-a568-46f3-90dc-ad3ad328567c","Type":"ContainerDied","Data":"a1b62e13af38a597415ae0ab25b6c8b2f8f881fb000bc1deaa3119fbaafa4683"} Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.596698 4809 scope.go:117] "RemoveContainer" containerID="51f722f1046e756005d1581f2bc10ba9953b9f5810eb226613552aaf6604b683" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.596442 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.651415 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.660145 4809 scope.go:117] "RemoveContainer" containerID="c841726f0c29effa2d9e38f839e1468c20a21a08bf986b3b1775e124fb367a95" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.667498 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.686390 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:47:59 crc kubenswrapper[4809]: E0226 14:47:59.687099 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="rabbitmq" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.687116 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="rabbitmq" Feb 26 14:47:59 crc kubenswrapper[4809]: E0226 14:47:59.687151 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="setup-container" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.687159 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="setup-container" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.687460 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" containerName="rabbitmq" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.689543 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.698693 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.833111 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.833182 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-pod-info\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.833223 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-config-data\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.833590 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.833790 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834185 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkmk\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-kube-api-access-bbkmk\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834354 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834509 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.834836 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbkmk\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-kube-api-access-bbkmk\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936696 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936740 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936821 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936856 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936919 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.936982 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937027 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-pod-info\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-config-data\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937141 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937184 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.937874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.938251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.938339 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-config-data\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.938608 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-server-conf\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.941481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-pod-info\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.942288 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.942314 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/317625664eb02b71305e33edf97c9510b04dffcc4948cb871909f59709469599/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.942822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.951744 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.956485 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:47:59 crc kubenswrapper[4809]: I0226 14:47:59.956819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbkmk\" (UniqueName: \"kubernetes.io/projected/20d71c1c-fd94-4fd4-b4b7-fd776b33e715-kube-api-access-bbkmk\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.006214 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1ad41b1-45a6-42be-96f1-97a601b3a79f\") pod \"rabbitmq-server-0\" (UID: \"20d71c1c-fd94-4fd4-b4b7-fd776b33e715\") " pod="openstack/rabbitmq-server-0" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.074765 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.168356 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535288-842tt"] Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.172320 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.174486 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.174717 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.175434 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.199712 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-842tt"] Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.245814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9p5\" (UniqueName: \"kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5\") pod \"auto-csr-approver-29535288-842tt\" (UID: \"0d0ebdaa-3d31-4715-bb02-241b564ad69c\") " pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.260985 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:48:00 crc kubenswrapper[4809]: E0226 14:48:00.261271 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.278075 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e03baa-a568-46f3-90dc-ad3ad328567c" path="/var/lib/kubelet/pods/a9e03baa-a568-46f3-90dc-ad3ad328567c/volumes" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.353818 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9p5\" (UniqueName: \"kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5\") pod \"auto-csr-approver-29535288-842tt\" (UID: \"0d0ebdaa-3d31-4715-bb02-241b564ad69c\") " pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.378048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9p5\" (UniqueName: \"kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5\") pod \"auto-csr-approver-29535288-842tt\" (UID: \"0d0ebdaa-3d31-4715-bb02-241b564ad69c\") " pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.519891 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:00 crc kubenswrapper[4809]: I0226 14:48:00.647161 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 14:48:01 crc kubenswrapper[4809]: W0226 14:48:01.076596 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d0ebdaa_3d31_4715_bb02_241b564ad69c.slice/crio-7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b WatchSource:0}: Error finding container 7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b: Status 404 returned error can't find the container with id 7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b Feb 26 14:48:01 crc kubenswrapper[4809]: I0226 14:48:01.079790 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-842tt"] Feb 26 14:48:01 crc kubenswrapper[4809]: I0226 14:48:01.622645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-842tt" event={"ID":"0d0ebdaa-3d31-4715-bb02-241b564ad69c","Type":"ContainerStarted","Data":"7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b"} Feb 26 14:48:01 crc kubenswrapper[4809]: I0226 14:48:01.623919 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20d71c1c-fd94-4fd4-b4b7-fd776b33e715","Type":"ContainerStarted","Data":"6ca67493a141ee8fd5d5d6144812da8f84cafb24f4858749645e26542fdfaad6"} Feb 26 14:48:02 crc kubenswrapper[4809]: I0226 14:48:02.696930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20d71c1c-fd94-4fd4-b4b7-fd776b33e715","Type":"ContainerStarted","Data":"747e901833b75dc2dfc86f0129765ff6aa429053dc1ef1a0afcca4ebe49804fb"} Feb 26 14:48:03 crc kubenswrapper[4809]: I0226 14:48:03.717460 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-842tt" event={"ID":"0d0ebdaa-3d31-4715-bb02-241b564ad69c","Type":"ContainerStarted","Data":"ad1d24417760e5b034492fd0f48b89f452925ca8ea477b5cf90f4a60013fa046"} Feb 26 14:48:03 crc kubenswrapper[4809]: I0226 14:48:03.737746 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535288-842tt" podStartSLOduration=2.561456993 podStartE2EDuration="3.737728391s" podCreationTimestamp="2026-02-26 14:48:00 +0000 UTC" firstStartedPulling="2026-02-26 14:48:01.080649606 +0000 UTC m=+2059.553970129" lastFinishedPulling="2026-02-26 14:48:02.256921004 +0000 UTC m=+2060.730241527" observedRunningTime="2026-02-26 14:48:03.732444569 +0000 UTC m=+2062.205765092" watchObservedRunningTime="2026-02-26 14:48:03.737728391 +0000 UTC m=+2062.211048914" Feb 26 14:48:04 crc kubenswrapper[4809]: I0226 14:48:04.765497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-842tt" event={"ID":"0d0ebdaa-3d31-4715-bb02-241b564ad69c","Type":"ContainerDied","Data":"ad1d24417760e5b034492fd0f48b89f452925ca8ea477b5cf90f4a60013fa046"} Feb 26 14:48:04 crc kubenswrapper[4809]: I0226 14:48:04.765651 4809 generic.go:334] "Generic (PLEG): container finished" podID="0d0ebdaa-3d31-4715-bb02-241b564ad69c" containerID="ad1d24417760e5b034492fd0f48b89f452925ca8ea477b5cf90f4a60013fa046" exitCode=0 Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.251166 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.418744 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w9p5\" (UniqueName: \"kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5\") pod \"0d0ebdaa-3d31-4715-bb02-241b564ad69c\" (UID: \"0d0ebdaa-3d31-4715-bb02-241b564ad69c\") " Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.426308 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5" (OuterVolumeSpecName: "kube-api-access-6w9p5") pod "0d0ebdaa-3d31-4715-bb02-241b564ad69c" (UID: "0d0ebdaa-3d31-4715-bb02-241b564ad69c"). InnerVolumeSpecName "kube-api-access-6w9p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.522322 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w9p5\" (UniqueName: \"kubernetes.io/projected/0d0ebdaa-3d31-4715-bb02-241b564ad69c-kube-api-access-6w9p5\") on node \"crc\" DevicePath \"\"" Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.798841 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535288-842tt" event={"ID":"0d0ebdaa-3d31-4715-bb02-241b564ad69c","Type":"ContainerDied","Data":"7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b"} Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.798878 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7523894ad4533959ffa652107c1c4592effd295df2e0a55ed39d5fcab333fb9b" Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.798948 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535288-842tt" Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.822024 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-9wjps"] Feb 26 14:48:06 crc kubenswrapper[4809]: I0226 14:48:06.834310 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535282-9wjps"] Feb 26 14:48:08 crc kubenswrapper[4809]: I0226 14:48:08.272062 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcebfa0a-afe4-41c4-9812-988cbc677e95" path="/var/lib/kubelet/pods/fcebfa0a-afe4-41c4-9812-988cbc677e95/volumes" Feb 26 14:48:11 crc kubenswrapper[4809]: I0226 14:48:11.257659 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:48:11 crc kubenswrapper[4809]: E0226 14:48:11.258390 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:48:26 crc kubenswrapper[4809]: I0226 14:48:26.257188 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:48:27 crc kubenswrapper[4809]: I0226 14:48:27.070770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3"} Feb 26 14:48:35 crc kubenswrapper[4809]: I0226 14:48:35.184181 4809 generic.go:334] "Generic (PLEG): container finished" podID="20d71c1c-fd94-4fd4-b4b7-fd776b33e715" containerID="747e901833b75dc2dfc86f0129765ff6aa429053dc1ef1a0afcca4ebe49804fb" exitCode=0 Feb 26 14:48:35 crc kubenswrapper[4809]: I0226 14:48:35.184286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20d71c1c-fd94-4fd4-b4b7-fd776b33e715","Type":"ContainerDied","Data":"747e901833b75dc2dfc86f0129765ff6aa429053dc1ef1a0afcca4ebe49804fb"} Feb 26 14:48:36 crc kubenswrapper[4809]: I0226 14:48:36.199490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"20d71c1c-fd94-4fd4-b4b7-fd776b33e715","Type":"ContainerStarted","Data":"0fc2a04158b650779821195dc9ba1308d94281c26d3072f2660486e095595ab3"} Feb 26 14:48:36 crc kubenswrapper[4809]: I0226 14:48:36.201070 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 14:48:36 crc kubenswrapper[4809]: I0226 14:48:36.229851 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.229834662 podStartE2EDuration="37.229834662s" podCreationTimestamp="2026-02-26 14:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 14:48:36.223668755 +0000 UTC m=+2094.696989278" watchObservedRunningTime="2026-02-26 14:48:36.229834662 +0000 UTC m=+2094.703155185" Feb 26 14:48:50 crc kubenswrapper[4809]: I0226 14:48:50.078315 4809 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 14:48:59 crc kubenswrapper[4809]: I0226 14:48:59.052591 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-b5tkr"] Feb 26 14:48:59 crc kubenswrapper[4809]: I0226 14:48:59.064369 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b727-account-create-update-m7lbg"] Feb 26 14:48:59 crc kubenswrapper[4809]: I0226 14:48:59.075408 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-b5tkr"] Feb 26 14:48:59 crc kubenswrapper[4809]: I0226 14:48:59.086370 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b727-account-create-update-m7lbg"] Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.064352 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a996-account-create-update-hlkf2"] Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.080232 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cjlv9"] Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.094230 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a996-account-create-update-hlkf2"] Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.109565 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cjlv9"] Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.280812 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ac330f-10c7-4cf8-8a22-0ad54c655091" path="/var/lib/kubelet/pods/70ac330f-10c7-4cf8-8a22-0ad54c655091/volumes" Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.283945 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cda5ba3-0335-4853-a084-c30c335e99ff" path="/var/lib/kubelet/pods/7cda5ba3-0335-4853-a084-c30c335e99ff/volumes" Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.289452 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8408ab37-9e60-4307-bd8d-1b1d9db3f539" path="/var/lib/kubelet/pods/8408ab37-9e60-4307-bd8d-1b1d9db3f539/volumes" Feb 26 14:49:00 crc kubenswrapper[4809]: I0226 14:49:00.291768 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9528bca-6e44-425a-8abe-9ecbed0b60d0" path="/var/lib/kubelet/pods/a9528bca-6e44-425a-8abe-9ecbed0b60d0/volumes" Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.066894 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wxkll"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.083328 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-7zml8"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.093831 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-87d4-account-create-update-zw2lg"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.104838 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wxkll"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.115639 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8s7lr"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.126770 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-87d4-account-create-update-zw2lg"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.136599 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-db-create-7zml8"] Feb 26 14:49:01 crc kubenswrapper[4809]: I0226 14:49:01.146033 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8s7lr"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.050029 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-q57fj"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.073317 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vsnnq"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.083857 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-88e3-account-create-update-r9tck"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.097996 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-q57fj"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.111708 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-88e3-account-create-update-r9tck"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.122866 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-vsnnq"] Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.292700 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388698b9-4d79-4309-94a1-d867b2dd8cdc" path="/var/lib/kubelet/pods/388698b9-4d79-4309-94a1-d867b2dd8cdc/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.295336 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ccf338c-7d94-4016-aa75-1986453f45a4" path="/var/lib/kubelet/pods/6ccf338c-7d94-4016-aa75-1986453f45a4/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.297097 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d992047-47b5-4e8f-8b23-9e87ceef8d70" path="/var/lib/kubelet/pods/6d992047-47b5-4e8f-8b23-9e87ceef8d70/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.297713 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84213e71-f500-4e4a-8a0a-123129d86cf4" path="/var/lib/kubelet/pods/84213e71-f500-4e4a-8a0a-123129d86cf4/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.301297 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e4ee77-6195-4e59-85b2-ff393dfe933e" path="/var/lib/kubelet/pods/b6e4ee77-6195-4e59-85b2-ff393dfe933e/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.302369 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbc3ad8-368d-42a5-ba41-2c89e8b0502a" path="/var/lib/kubelet/pods/dbbc3ad8-368d-42a5-ba41-2c89e8b0502a/volumes" Feb 26 14:49:02 crc kubenswrapper[4809]: I0226 14:49:02.304397 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de27bcc6-91a3-4610-9611-0f1d5065b8a7" path="/var/lib/kubelet/pods/de27bcc6-91a3-4610-9611-0f1d5065b8a7/volumes" Feb 26 14:49:03 crc kubenswrapper[4809]: I0226 14:49:03.047893 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-93c6-account-create-update-mkb9z"] Feb 26 14:49:03 crc kubenswrapper[4809]: I0226 14:49:03.058894 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-fc96-account-create-update-fp688"] Feb 26 14:49:03 crc kubenswrapper[4809]: I0226 14:49:03.070810 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-fc96-account-create-update-fp688"] Feb 26 14:49:03 crc kubenswrapper[4809]: 
I0226 14:49:03.082744 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-93c6-account-create-update-mkb9z"] Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.040792 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4865-account-create-update-qwlm4"] Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.051565 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4865-account-create-update-qwlm4"] Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.062030 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fbfbm"] Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.076781 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fbfbm"] Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.285341 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a2cf63-3d00-4de9-ae7e-c6d45402e573" path="/var/lib/kubelet/pods/15a2cf63-3d00-4de9-ae7e-c6d45402e573/volumes" Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.288538 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27297557-090e-4476-ae2c-266a0bb3fdb6" path="/var/lib/kubelet/pods/27297557-090e-4476-ae2c-266a0bb3fdb6/volumes" Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.289486 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c95b42-bbb4-4348-919d-82e14dccc8b6" path="/var/lib/kubelet/pods/98c95b42-bbb4-4348-919d-82e14dccc8b6/volumes" Feb 26 14:49:04 crc kubenswrapper[4809]: I0226 14:49:04.290408 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6ffefd-5f03-430c-a852-5a971a3959a2" path="/var/lib/kubelet/pods/bf6ffefd-5f03-430c-a852-5a971a3959a2/volumes" Feb 26 14:49:05 crc kubenswrapper[4809]: I0226 14:49:05.032894 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ae2c-account-create-update-rj9zh"] Feb 26 14:49:05 crc kubenswrapper[4809]: I0226 14:49:05.044624 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ae2c-account-create-update-rj9zh"] Feb 26 14:49:06 crc kubenswrapper[4809]: I0226 14:49:06.271517 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c868b6d6-47c7-45db-bfb1-f24b55ce40df" path="/var/lib/kubelet/pods/c868b6d6-47c7-45db-bfb1-f24b55ce40df/volumes" Feb 26 14:49:07 crc kubenswrapper[4809]: I0226 14:49:07.833175 4809 scope.go:117] "RemoveContainer" containerID="933197a4c7afe8168d9b3cc7c49bd43aa861001d37bdd49b13c11e512ab6feb7" Feb 26 14:49:07 crc kubenswrapper[4809]: I0226 14:49:07.879346 4809 scope.go:117] "RemoveContainer" containerID="a7ac3d8450e007489498245c64f81d771c859903399d2a8df5eb43d65ecc1558" Feb 26 14:49:07 crc kubenswrapper[4809]: I0226 14:49:07.945072 4809 scope.go:117] "RemoveContainer" containerID="ae2629ed7db0eab068b1526f32757812a9ac16cc6106f5d8d08141813780c33d" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.011494 4809 scope.go:117] "RemoveContainer" containerID="082bd640c401791a3d397d10b7b862631670ae652a869c89275ce108288268c1" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.055526 4809 scope.go:117] "RemoveContainer" containerID="10b9c9d765f774e794cfb3a5b0980f96f1fdeef253f6bd2f14da259e47493751" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.080954 4809 scope.go:117] "RemoveContainer" containerID="e6757ab60601c78d658ee5bc813f0def24c54db3e5aa9da9128cbb3ad5212f92" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.103558 4809 
scope.go:117] "RemoveContainer" containerID="a873706770e4266a295709528f29b42bf1ddea948366438c8eefd3720e2d4366" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.173286 4809 scope.go:117] "RemoveContainer" containerID="c608fd969da2349b9945cd8606af19ec50bd74bc663106e69e21a55976eb8b09" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.197679 4809 scope.go:117] "RemoveContainer" containerID="32cd197b16e8e3e17556b51755eaee999fa2571c99b868e9f2279bb59d34ac08" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.253619 4809 scope.go:117] "RemoveContainer" containerID="310847742afb45da31139ebe3f52266eff4054c69eb659b25ccf0c9ea77b9d49" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.276698 4809 scope.go:117] "RemoveContainer" containerID="b5b3868eb43bb1ff8325589f40d8f6f78d35d3f037948c9e646a9c9f7b11d32a" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.307342 4809 scope.go:117] "RemoveContainer" containerID="61e7b56a944495b0a5638199c9fcd4f7457b4303bbea0ab1aac9d1b3f606fa5e" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.330028 4809 scope.go:117] "RemoveContainer" containerID="4eb318db63aa29e504a7302a6a54c7b3bc31cb9b09982841b24d4f2423593da1" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.352670 4809 scope.go:117] "RemoveContainer" containerID="0ab02caa01c8c6e8651a90d4a6889c0fa43a7cd9121c00cdb2faa03b8ff377fb" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.375044 4809 scope.go:117] "RemoveContainer" containerID="c10e583565c23ff4b94a937d19cb8e51073698fc165b1d6f90699a5db11e26b9" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.401561 4809 scope.go:117] "RemoveContainer" containerID="74807e52271578a4863684dbcd4e63f60e4125ad52cb35b52d16cc7285a53f25" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.424671 4809 scope.go:117] "RemoveContainer" containerID="7cacc812556f836dc908844efbf858db3ac668a0cab8dacd865547954bf6603d" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.457308 4809 scope.go:117] "RemoveContainer" containerID="481a4e5754d12714d6896ac5aae8451ac71c85cf679ef4722ff91f8ec7d4773d" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.498440 4809 scope.go:117] "RemoveContainer" containerID="ce06c3686d21a4975055825abdae1ed5f64d537743a4ee4d347c5ac609a47dd3" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.522858 4809 scope.go:117] "RemoveContainer" containerID="ce975f1ea2c4def4a76ccea807f003154aac718656f993c1fa57764a13c8a4ad" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.555189 4809 scope.go:117] "RemoveContainer" containerID="79fd9991e5e2d68987ced9c013cf26f4c44ba2b57a23741e975d08d20f8cdaa1" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.590417 4809 scope.go:117] "RemoveContainer" containerID="05ff486b6a4b70f6a8467d6fd90590cb9aad27c45ea430ede2b95067360de5a7" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.614127 4809 scope.go:117] "RemoveContainer" containerID="bec3a9c7312aad1a4125315d8b0074291e0fa641ebd6763f31359d19e71a3945" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.638965 4809 scope.go:117] "RemoveContainer" containerID="dec548d58c17f7a898778ef58a347e6bea17538d48c008511e91f8204e2efb7b" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.665427 4809 scope.go:117] "RemoveContainer" containerID="db56aa97ad19933aaa200d9d9a1d69cccc65324ec4b4bd6b99347587546e80ae" Feb 26 14:49:08 crc kubenswrapper[4809]: I0226 14:49:08.696992 4809 scope.go:117] "RemoveContainer" containerID="5384b7a15fa5b1b0743e6a9c331659eee59fc3e23b20e17e4b965189614b5e49" Feb 26 14:49:12 crc kubenswrapper[4809]: 
I0226 14:49:12.069866 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf"] Feb 26 14:49:12 crc kubenswrapper[4809]: I0226 14:49:12.082182 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-fbdc-account-create-update-8z2q5"] Feb 26 14:49:12 crc kubenswrapper[4809]: I0226 14:49:12.093164 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5zxhf"] Feb 26 14:49:12 crc kubenswrapper[4809]: I0226 14:49:12.105116 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-fbdc-account-create-update-8z2q5"] Feb 26 14:49:12 crc kubenswrapper[4809]: I0226 14:49:12.282500 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ca006a-76e9-4923-b437-9574f83a33ec" path="/var/lib/kubelet/pods/08ca006a-76e9-4923-b437-9574f83a33ec/volumes" Feb 26 14:49:12 crc kubenswrapper[4809]: I0226 14:49:12.283539 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="940c7f45-d4db-4915-9e05-b3d6be8cbc8a" path="/var/lib/kubelet/pods/940c7f45-d4db-4915-9e05-b3d6be8cbc8a/volumes" Feb 26 14:49:28 crc kubenswrapper[4809]: I0226 14:49:28.051622 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pdkz9"] Feb 26 14:49:28 crc kubenswrapper[4809]: I0226 14:49:28.090681 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pdkz9"] Feb 26 14:49:28 crc kubenswrapper[4809]: I0226 14:49:28.273541 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe49627e-5430-4a47-b96d-cd756aecfc5c" path="/var/lib/kubelet/pods/fe49627e-5430-4a47-b96d-cd756aecfc5c/volumes" Feb 26 14:49:31 crc kubenswrapper[4809]: I0226 14:49:31.036098 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ztvfb"] Feb 26 14:49:31 crc kubenswrapper[4809]: I0226 14:49:31.049528 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ztvfb"] Feb 26 14:49:32 crc kubenswrapper[4809]: I0226 14:49:32.279353 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32a25c3-1275-463a-bfca-f7cac13c5048" path="/var/lib/kubelet/pods/d32a25c3-1275-463a-bfca-f7cac13c5048/volumes" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.134124 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:33 crc kubenswrapper[4809]: E0226 14:49:33.134688 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0ebdaa-3d31-4715-bb02-241b564ad69c" containerName="oc" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.134707 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0ebdaa-3d31-4715-bb02-241b564ad69c" containerName="oc" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.135531 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0ebdaa-3d31-4715-bb02-241b564ad69c" containerName="oc" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.143413 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.159067 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.229927 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xfq\" (UniqueName: \"kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.230331 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.230485 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.332776 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.333401 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.333690 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.334030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.334209 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xfq\" (UniqueName: \"kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.354826 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s8xfq\" (UniqueName: \"kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq\") pod \"redhat-marketplace-gnxwb\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:33 crc kubenswrapper[4809]: I0226 14:49:33.469350 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:34 crc kubenswrapper[4809]: I0226 14:49:34.069552 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:34 crc kubenswrapper[4809]: W0226 14:49:34.070769 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc662a18_5040_43c6_bb62_3832f19bb5ef.slice/crio-4e3b56e2d37518d39fd49fc9e2d11a012b39245337456811c3b4d06a36513303 WatchSource:0}: Error finding container 4e3b56e2d37518d39fd49fc9e2d11a012b39245337456811c3b4d06a36513303: Status 404 returned error can't find the container with id 4e3b56e2d37518d39fd49fc9e2d11a012b39245337456811c3b4d06a36513303 Feb 26 14:49:35 crc kubenswrapper[4809]: I0226 14:49:35.063970 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerID="68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a" exitCode=0 Feb 26 14:49:35 crc kubenswrapper[4809]: I0226 14:49:35.064050 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerDied","Data":"68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a"} Feb 26 14:49:35 crc kubenswrapper[4809]: I0226 14:49:35.064455 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerStarted","Data":"4e3b56e2d37518d39fd49fc9e2d11a012b39245337456811c3b4d06a36513303"} Feb 26 14:49:35 crc kubenswrapper[4809]: I0226 14:49:35.066941 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 14:49:36 crc kubenswrapper[4809]: I0226 14:49:36.077909 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerStarted","Data":"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b"} Feb 26 14:49:38 crc kubenswrapper[4809]: I0226 14:49:38.105654 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerID="8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b" exitCode=0 Feb 26 14:49:38 crc kubenswrapper[4809]: I0226 14:49:38.105770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerDied","Data":"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b"} Feb 26 14:49:39 crc kubenswrapper[4809]: I0226 14:49:39.123490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerStarted","Data":"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901"} Feb 26 14:49:39 crc kubenswrapper[4809]: I0226 14:49:39.150171 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gnxwb" podStartSLOduration=2.714011711 podStartE2EDuration="6.150148471s" podCreationTimestamp="2026-02-26 14:49:33 +0000 UTC" firstStartedPulling="2026-02-26 14:49:35.066621668 +0000 UTC m=+2153.539942191" lastFinishedPulling="2026-02-26 14:49:38.502758418 +0000 UTC m=+2156.976078951" observedRunningTime="2026-02-26 14:49:39.148916856 +0000 UTC m=+2157.622237379" watchObservedRunningTime="2026-02-26 14:49:39.150148471 +0000 UTC m=+2157.623469004" Feb 26 14:49:43 crc kubenswrapper[4809]: I0226 14:49:43.470010 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:43 crc kubenswrapper[4809]: I0226 14:49:43.470377 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:43 crc kubenswrapper[4809]: I0226 14:49:43.548062 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:44 crc kubenswrapper[4809]: I0226 14:49:44.275104 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:44 crc kubenswrapper[4809]: I0226 14:49:44.357606 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.211184 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gnxwb" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="registry-server" containerID="cri-o://ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901" gracePeriod=2 Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.792372 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.881470 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content\") pod \"bc662a18-5040-43c6-bb62-3832f19bb5ef\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.881577 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8xfq\" (UniqueName: \"kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq\") pod \"bc662a18-5040-43c6-bb62-3832f19bb5ef\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.881661 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities\") pod \"bc662a18-5040-43c6-bb62-3832f19bb5ef\" (UID: \"bc662a18-5040-43c6-bb62-3832f19bb5ef\") " Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.882855 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities" (OuterVolumeSpecName: "utilities") pod "bc662a18-5040-43c6-bb62-3832f19bb5ef" (UID: "bc662a18-5040-43c6-bb62-3832f19bb5ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.888107 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq" (OuterVolumeSpecName: "kube-api-access-s8xfq") pod "bc662a18-5040-43c6-bb62-3832f19bb5ef" (UID: "bc662a18-5040-43c6-bb62-3832f19bb5ef"). InnerVolumeSpecName "kube-api-access-s8xfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.916978 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc662a18-5040-43c6-bb62-3832f19bb5ef" (UID: "bc662a18-5040-43c6-bb62-3832f19bb5ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.984247 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.984493 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8xfq\" (UniqueName: \"kubernetes.io/projected/bc662a18-5040-43c6-bb62-3832f19bb5ef-kube-api-access-s8xfq\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:46 crc kubenswrapper[4809]: I0226 14:49:46.984572 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc662a18-5040-43c6-bb62-3832f19bb5ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.227646 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerID="ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901" exitCode=0 Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.227695 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerDied","Data":"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901"} Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.227724 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnxwb" event={"ID":"bc662a18-5040-43c6-bb62-3832f19bb5ef","Type":"ContainerDied","Data":"4e3b56e2d37518d39fd49fc9e2d11a012b39245337456811c3b4d06a36513303"} Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.227743 4809 scope.go:117] "RemoveContainer" containerID="ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.227745 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnxwb" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.272261 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.275791 4809 scope.go:117] "RemoveContainer" containerID="8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.292676 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnxwb"] Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.305446 4809 scope.go:117] "RemoveContainer" containerID="68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.361396 4809 scope.go:117] "RemoveContainer" containerID="ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901" Feb 26 14:49:47 crc kubenswrapper[4809]: E0226 14:49:47.362157 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901\": container with ID starting with ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901 not found: ID does not exist" containerID="ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.362206 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901"} err="failed to get container status \"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901\": rpc error: code = NotFound desc = could not find container \"ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901\": container with ID starting with ec43075d0f9ee0a4844c723a380578577ef4e9525e54ecd6312c5e3aec4e3901 not found: ID does not exist" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.362235 4809 scope.go:117] "RemoveContainer" containerID="8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b" Feb 26 14:49:47 crc kubenswrapper[4809]: E0226 14:49:47.362756 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b\": container with ID starting with 8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b not found: ID does not exist" containerID="8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.362799 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b"} err="failed to get container status \"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b\": rpc error: code = NotFound desc = could not find container \"8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b\": container with ID starting with 8401a8026ea80e7d781d7bbf7f1259f3a747a91210c5e97f8dabd1668031fa2b not found: ID does not exist" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.362844 4809 scope.go:117] "RemoveContainer" containerID="68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a" Feb 26 14:49:47 crc kubenswrapper[4809]: E0226 14:49:47.363605 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a\": container with ID starting with 68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a not found: ID does not exist" containerID="68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a" Feb 26 14:49:47 crc kubenswrapper[4809]: I0226 14:49:47.363656 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a"} err="failed to get container status \"68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a\": rpc error: code = NotFound desc = could not find container \"68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a\": container with ID starting with 68dbd91b187e885b7bb99a9348938edaec8dfa80492dfc2f953aad81f62c274a not found: ID does not exist" Feb 26 14:49:48 crc kubenswrapper[4809]: I0226 14:49:48.272585 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" path="/var/lib/kubelet/pods/bc662a18-5040-43c6-bb62-3832f19bb5ef/volumes" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.160199 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535290-sqr6p"] Feb 26 14:50:00 crc kubenswrapper[4809]: E0226 14:50:00.161415 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.161437 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4809]: E0226 14:50:00.161509 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="extract-content" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.161522 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="extract-content" Feb 26 14:50:00 crc kubenswrapper[4809]: E0226 14:50:00.161543 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="extract-utilities" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.161554 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="extract-utilities" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.161955 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc662a18-5040-43c6-bb62-3832f19bb5ef" containerName="registry-server" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.163083 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.166514 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.167254 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.172342 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.174671 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-sqr6p"] Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.240254 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzkq\" (UniqueName: \"kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq\") pod \"auto-csr-approver-29535290-sqr6p\" (UID: \"eb08ae20-f6b0-489a-a408-5efee2ec79f0\") " pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.342708 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzkq\" (UniqueName: \"kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq\") pod \"auto-csr-approver-29535290-sqr6p\" (UID: \"eb08ae20-f6b0-489a-a408-5efee2ec79f0\") " pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.375765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzkq\" (UniqueName: \"kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq\") pod \"auto-csr-approver-29535290-sqr6p\" (UID: \"eb08ae20-f6b0-489a-a408-5efee2ec79f0\") " pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.493070 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:00 crc kubenswrapper[4809]: I0226 14:50:00.991265 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-sqr6p"] Feb 26 14:50:01 crc kubenswrapper[4809]: I0226 14:50:01.398457 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" event={"ID":"eb08ae20-f6b0-489a-a408-5efee2ec79f0","Type":"ContainerStarted","Data":"243a14ad57b7b2bcf5d51a3878f4c8ec7e9345bfa981fcbc93cfe84bcd14ec92"} Feb 26 14:50:03 crc kubenswrapper[4809]: I0226 14:50:03.430109 4809 generic.go:334] "Generic (PLEG): container finished" podID="eb08ae20-f6b0-489a-a408-5efee2ec79f0" containerID="78cf1dc9392607eeb8a3ca38c10fd2d791168d9e1b5f8c74fb27ddea1850c48a" exitCode=0 Feb 26 14:50:03 crc kubenswrapper[4809]: I0226 14:50:03.430642 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" event={"ID":"eb08ae20-f6b0-489a-a408-5efee2ec79f0","Type":"ContainerDied","Data":"78cf1dc9392607eeb8a3ca38c10fd2d791168d9e1b5f8c74fb27ddea1850c48a"} Feb 26 14:50:04 crc kubenswrapper[4809]: I0226 14:50:04.923900 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.073463 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbzkq\" (UniqueName: \"kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq\") pod \"eb08ae20-f6b0-489a-a408-5efee2ec79f0\" (UID: \"eb08ae20-f6b0-489a-a408-5efee2ec79f0\") " Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.078737 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq" (OuterVolumeSpecName: "kube-api-access-dbzkq") pod "eb08ae20-f6b0-489a-a408-5efee2ec79f0" (UID: "eb08ae20-f6b0-489a-a408-5efee2ec79f0"). InnerVolumeSpecName "kube-api-access-dbzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.176756 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbzkq\" (UniqueName: \"kubernetes.io/projected/eb08ae20-f6b0-489a-a408-5efee2ec79f0-kube-api-access-dbzkq\") on node \"crc\" DevicePath \"\"" Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.456809 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" event={"ID":"eb08ae20-f6b0-489a-a408-5efee2ec79f0","Type":"ContainerDied","Data":"243a14ad57b7b2bcf5d51a3878f4c8ec7e9345bfa981fcbc93cfe84bcd14ec92"} Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.456862 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243a14ad57b7b2bcf5d51a3878f4c8ec7e9345bfa981fcbc93cfe84bcd14ec92" Feb 26 14:50:05 crc kubenswrapper[4809]: I0226 14:50:05.456868 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535290-sqr6p" Feb 26 14:50:06 crc kubenswrapper[4809]: I0226 14:50:06.008303 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-wkthg"] Feb 26 14:50:06 crc kubenswrapper[4809]: I0226 14:50:06.020922 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535284-wkthg"] Feb 26 14:50:06 crc kubenswrapper[4809]: I0226 14:50:06.273099 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b502e6-3aba-436c-b8c2-ef8a4d18e607" path="/var/lib/kubelet/pods/c1b502e6-3aba-436c-b8c2-ef8a4d18e607/volumes" Feb 26 14:50:09 crc kubenswrapper[4809]: I0226 14:50:09.113914 4809 scope.go:117] "RemoveContainer" containerID="27650a03bfc0d833f9dbaa1fa0b4182d8e149d2caa2a05e0e16dd92072d7f794" Feb 26 14:50:09 crc kubenswrapper[4809]: I0226 14:50:09.200933 4809 scope.go:117] "RemoveContainer" containerID="e6c0ec8f0111dc82c2daf78471e69a6620bcafa44077622c72efe0d81176524f" Feb 26 14:50:09 crc kubenswrapper[4809]: I0226 14:50:09.233068 4809 scope.go:117] "RemoveContainer" containerID="317e6bc04178519de1bfc2b4221d35eed35e5c95b47a8a05a6eb42d3c2b9e248" Feb 26 14:50:09 crc kubenswrapper[4809]: I0226 14:50:09.287570 4809 scope.go:117] "RemoveContainer" containerID="19ab8b037f2b87d4beecc408860cd0fd4ae9e264b40cac919dd97b9734f516de" Feb 26 14:50:09 crc kubenswrapper[4809]: I0226 14:50:09.344456 4809 scope.go:117] "RemoveContainer" containerID="23f18511af554514f384b686cc2c5edac94af249eaf6a6c51bf3cfd551224bdc" Feb 26 14:50:11 crc kubenswrapper[4809]: I0226 14:50:11.040993 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-sdgpk"] Feb 26 14:50:11 crc kubenswrapper[4809]: I0226 14:50:11.056720 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-sdgpk"] Feb 26 14:50:12 crc kubenswrapper[4809]: I0226 14:50:12.045239 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-x42ls"] Feb 26 14:50:12 crc kubenswrapper[4809]: I0226 14:50:12.063567 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-x42ls"] Feb 26 14:50:12 crc kubenswrapper[4809]: I0226 14:50:12.271202 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8" path="/var/lib/kubelet/pods/3bb17c1c-e42a-4ba4-83da-2e2845b5f3d8/volumes" Feb 26 14:50:12 crc kubenswrapper[4809]: I0226 14:50:12.272821 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cac2949-71b1-417b-b184-e890f4a309ad" path="/var/lib/kubelet/pods/8cac2949-71b1-417b-b184-e890f4a309ad/volumes" Feb 26 14:50:14 crc kubenswrapper[4809]: I0226 14:50:14.043568 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rhm7x"] Feb 26 14:50:14 crc kubenswrapper[4809]: I0226 14:50:14.058875 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rhm7x"] Feb 26 14:50:14 crc kubenswrapper[4809]: I0226 14:50:14.282166 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703ff5d0-61b5-407c-b4de-b163668a8851" path="/var/lib/kubelet/pods/703ff5d0-61b5-407c-b4de-b163668a8851/volumes" Feb 26 14:50:41 crc kubenswrapper[4809]: I0226 14:50:41.794161 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:50:41 crc kubenswrapper[4809]: I0226 14:50:41.794813 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:50:44 crc kubenswrapper[4809]: I0226 14:50:44.057234 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g49c6"] Feb 26 14:50:44 crc kubenswrapper[4809]: I0226 14:50:44.067977 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g49c6"] Feb 26 14:50:44 crc kubenswrapper[4809]: I0226 14:50:44.274779 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9" path="/var/lib/kubelet/pods/fd8eba85-e3ec-4b38-9e5f-7e5af79b93d9/volumes" Feb 26 14:50:58 crc kubenswrapper[4809]: I0226 14:50:58.115224 4809 generic.go:334] "Generic (PLEG): container finished" podID="ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" containerID="7c53292a42b710e6609f30b613b537b6acaa73b3dc8b0d12c774e79618ea8f23" exitCode=0 Feb 26 14:50:58 crc kubenswrapper[4809]: I0226 14:50:58.115712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" event={"ID":"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47","Type":"ContainerDied","Data":"7c53292a42b710e6609f30b613b537b6acaa73b3dc8b0d12c774e79618ea8f23"} Feb 26 14:50:58 crc kubenswrapper[4809]: E0226 14:50:58.316202 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2d7dc7_59ac_4b40_9a53_6f1a26eceb47.slice/crio-conmon-7c53292a42b710e6609f30b613b537b6acaa73b3dc8b0d12c774e79618ea8f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2d7dc7_59ac_4b40_9a53_6f1a26eceb47.slice/crio-7c53292a42b710e6609f30b613b537b6acaa73b3dc8b0d12c774e79618ea8f23.scope\": RecentStats: unable to find data in memory cache]" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.695889 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.886105 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam\") pod \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.886169 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle\") pod \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.886308 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory\") pod \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.886410 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwgt9\" (UniqueName: \"kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9\") pod \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\" (UID: \"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47\") " Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.894830 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9" (OuterVolumeSpecName: "kube-api-access-bwgt9") pod "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" (UID: "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47"). InnerVolumeSpecName "kube-api-access-bwgt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.896270 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" (UID: "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.919876 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory" (OuterVolumeSpecName: "inventory") pod "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" (UID: "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.920800 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" (UID: "ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.995091 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.995448 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwgt9\" (UniqueName: \"kubernetes.io/projected/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-kube-api-access-bwgt9\") on node \"crc\" DevicePath \"\"" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.995464 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:50:59 crc kubenswrapper[4809]: I0226 14:50:59.995478 4809 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.140298 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" event={"ID":"ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47","Type":"ContainerDied","Data":"c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23"} Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.140340 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31f50291a589f296f4f9b6c9654f1c45763b89ffd14f9d92a949801d012ba23" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.140392 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.242491 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm"] Feb 26 14:51:00 crc kubenswrapper[4809]: E0226 14:51:00.242956 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.242970 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 14:51:00 crc kubenswrapper[4809]: E0226 14:51:00.242992 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08ae20-f6b0-489a-a408-5efee2ec79f0" containerName="oc" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.242998 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08ae20-f6b0-489a-a408-5efee2ec79f0" containerName="oc" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.243218 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08ae20-f6b0-489a-a408-5efee2ec79f0" containerName="oc" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.243239 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.243967 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.246489 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.247363 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.247762 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.249579 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.255051 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm"] Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.432436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddr75\" (UniqueName: \"kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.432647 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.432717 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.534593 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.534692 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.534754 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddr75\" (UniqueName: 
\"kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.539955 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.540690 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.556240 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddr75\" (UniqueName: \"kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:00 crc kubenswrapper[4809]: I0226 14:51:00.597024 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" Feb 26 14:51:01 crc kubenswrapper[4809]: I0226 14:51:01.153408 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm"] Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.052382 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b7cnn"] Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.068632 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b7cnn"] Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.167102 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" event={"ID":"f33dc9c7-e973-434a-96c2-6712074b3ef8","Type":"ContainerStarted","Data":"39b578f5978adcb5b88f90a52fc62a65eaeb042f3e07d49c534f997f363bdf86"} Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.167154 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" event={"ID":"f33dc9c7-e973-434a-96c2-6712074b3ef8","Type":"ContainerStarted","Data":"cd4ac4e4cd2001cef706d26bf0392115ee3c4cb93f48c7d5fcf9360fdeceec75"} Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.185913 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" podStartSLOduration=1.688674202 podStartE2EDuration="2.185897357s" podCreationTimestamp="2026-02-26 14:51:00 +0000 UTC" firstStartedPulling="2026-02-26 14:51:01.172339089 +0000 UTC m=+2239.645659612" lastFinishedPulling="2026-02-26 14:51:01.669562244 +0000 UTC m=+2240.142882767" observedRunningTime="2026-02-26 14:51:02.183300333 
+0000 UTC m=+2240.656620856" watchObservedRunningTime="2026-02-26 14:51:02.185897357 +0000 UTC m=+2240.659217880" Feb 26 14:51:02 crc kubenswrapper[4809]: I0226 14:51:02.271232 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06597a2e-41b4-4d56-bed1-0cb73516bee0" path="/var/lib/kubelet/pods/06597a2e-41b4-4d56-bed1-0cb73516bee0/volumes" Feb 26 14:51:09 crc kubenswrapper[4809]: I0226 14:51:09.529545 4809 scope.go:117] "RemoveContainer" containerID="f833bb0d99a2999ea15062758a6c24644e0775b1b30bb0681a40b8d788567bc0" Feb 26 14:51:09 crc kubenswrapper[4809]: I0226 14:51:09.560064 4809 scope.go:117] "RemoveContainer" containerID="b2b000b45403b605c4810921b54d45452882eaa554acbc09360d4daab27ef554" Feb 26 14:51:09 crc kubenswrapper[4809]: I0226 14:51:09.627331 4809 scope.go:117] "RemoveContainer" containerID="0801b838d74337a799a33972e83e803fd41c03dac85dc4c693a1cb6db903f81d" Feb 26 14:51:09 crc kubenswrapper[4809]: I0226 14:51:09.695134 4809 scope.go:117] "RemoveContainer" containerID="a951219fdc2d9e5434d52ccc402f1c9691290b16f1d5fab63fe961e081b6e8d7" Feb 26 14:51:09 crc kubenswrapper[4809]: I0226 14:51:09.754848 4809 scope.go:117] "RemoveContainer" containerID="e69cdd804bb65af9a22abbef7a0e13f47df8570dbf3f30ae90cf74c365ada1a9" Feb 26 14:51:11 crc kubenswrapper[4809]: I0226 14:51:11.796455 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:51:11 crc kubenswrapper[4809]: I0226 14:51:11.796862 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:51:41 crc kubenswrapper[4809]: I0226 14:51:41.799575 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 14:51:41 crc kubenswrapper[4809]: I0226 14:51:41.800127 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 14:51:41 crc kubenswrapper[4809]: I0226 14:51:41.800181 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 14:51:41 crc kubenswrapper[4809]: I0226 14:51:41.801206 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 14:51:41 crc kubenswrapper[4809]: I0226 14:51:41.801339 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3" gracePeriod=600 Feb 26 14:51:42 crc kubenswrapper[4809]: I0226 14:51:42.708427 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3" exitCode=0 Feb 26 14:51:42 crc kubenswrapper[4809]: I0226 14:51:42.708505 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3"} Feb 26 14:51:42 crc kubenswrapper[4809]: I0226 14:51:42.708994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"} Feb 26 14:51:42 crc kubenswrapper[4809]: I0226 14:51:42.709073 4809 scope.go:117] "RemoveContainer" containerID="f17cf05c5152683d109ef735ff87a04a05922ebc89b2270017ff4fdc2b504155" Feb 26 14:51:49 crc kubenswrapper[4809]: I0226 14:51:49.083084 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b89wr"] Feb 26 14:51:49 crc kubenswrapper[4809]: I0226 14:51:49.102696 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b89wr"] Feb 26 14:51:50 crc kubenswrapper[4809]: I0226 14:51:50.271068 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf13b0e-9265-48c1-830b-8f0e59578fcf" path="/var/lib/kubelet/pods/ddf13b0e-9265-48c1-830b-8f0e59578fcf/volumes" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.155427 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535292-kt9rn"] Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.159435 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.165424 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-kt9rn"] Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.168457 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.168818 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.172059 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.210190 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjjn\" (UniqueName: \"kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn\") pod \"auto-csr-approver-29535292-kt9rn\" (UID: \"52890012-1f14-4113-b279-fd2a240978da\") " pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.314093 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjjn\" (UniqueName: \"kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn\") pod \"auto-csr-approver-29535292-kt9rn\" (UID: \"52890012-1f14-4113-b279-fd2a240978da\") " pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.338879 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjjn\" (UniqueName: \"kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn\") pod \"auto-csr-approver-29535292-kt9rn\" (UID: \"52890012-1f14-4113-b279-fd2a240978da\") " pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.481207 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.843834 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-kt9rn"] Feb 26 14:52:00 crc kubenswrapper[4809]: W0226 14:52:00.846058 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52890012_1f14_4113_b279_fd2a240978da.slice/crio-56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236 WatchSource:0}: Error finding container 56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236: Status 404 returned error can't find the container with id 56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236 Feb 26 14:52:00 crc kubenswrapper[4809]: I0226 14:52:00.945890 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" event={"ID":"52890012-1f14-4113-b279-fd2a240978da","Type":"ContainerStarted","Data":"56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236"} Feb 26 14:52:02 crc kubenswrapper[4809]: I0226 14:52:02.978859 4809 generic.go:334] "Generic (PLEG): container finished" podID="52890012-1f14-4113-b279-fd2a240978da" containerID="7121226dda8ab5e6a3693fc5102ef7f1c0ebbaa0033d462a05b876ee8af27d58" exitCode=0 Feb 26 14:52:02 crc kubenswrapper[4809]: I0226 14:52:02.979155 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" event={"ID":"52890012-1f14-4113-b279-fd2a240978da","Type":"ContainerDied","Data":"7121226dda8ab5e6a3693fc5102ef7f1c0ebbaa0033d462a05b876ee8af27d58"} Feb 26 14:52:04 crc kubenswrapper[4809]: I0226 14:52:04.453458 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:04 crc kubenswrapper[4809]: I0226 14:52:04.630396 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhjjn\" (UniqueName: \"kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn\") pod \"52890012-1f14-4113-b279-fd2a240978da\" (UID: \"52890012-1f14-4113-b279-fd2a240978da\") " Feb 26 14:52:04 crc kubenswrapper[4809]: I0226 14:52:04.638850 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn" (OuterVolumeSpecName: "kube-api-access-fhjjn") pod "52890012-1f14-4113-b279-fd2a240978da" (UID: "52890012-1f14-4113-b279-fd2a240978da"). InnerVolumeSpecName "kube-api-access-fhjjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:52:04 crc kubenswrapper[4809]: I0226 14:52:04.734393 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhjjn\" (UniqueName: \"kubernetes.io/projected/52890012-1f14-4113-b279-fd2a240978da-kube-api-access-fhjjn\") on node \"crc\" DevicePath \"\"" Feb 26 14:52:05 crc kubenswrapper[4809]: I0226 14:52:05.005273 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" event={"ID":"52890012-1f14-4113-b279-fd2a240978da","Type":"ContainerDied","Data":"56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236"} Feb 26 14:52:05 crc kubenswrapper[4809]: I0226 14:52:05.005311 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56177d0b7b7f2184f3fb18cebd87d2842f43b07946a8d926a4f4a89b1b880236" Feb 26 14:52:05 crc kubenswrapper[4809]: I0226 14:52:05.005343 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535292-kt9rn" Feb 26 14:52:05 crc kubenswrapper[4809]: I0226 14:52:05.531088 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-gswpg"] Feb 26 14:52:05 crc kubenswrapper[4809]: I0226 14:52:05.543275 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535286-gswpg"] Feb 26 14:52:06 crc kubenswrapper[4809]: I0226 14:52:06.279892 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345b13de-06f8-47c7-a9e4-e18fa30835a3" path="/var/lib/kubelet/pods/345b13de-06f8-47c7-a9e4-e18fa30835a3/volumes" Feb 26 14:52:09 crc kubenswrapper[4809]: I0226 14:52:09.957093 4809 scope.go:117] "RemoveContainer" containerID="b23687ee1125fe608d3e2e63998130bf767040e78c1dcf963247917d4da77d97" Feb 26 14:52:12 crc kubenswrapper[4809]: I0226 14:52:12.359706 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dfdt2"] Feb 26 14:52:12 crc kubenswrapper[4809]: I0226 14:52:12.372947 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dfdt2"] Feb 26 14:52:13 crc kubenswrapper[4809]: I0226 14:52:13.046133 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-t4dwk"] Feb 26 14:52:13 crc kubenswrapper[4809]: I0226 14:52:13.058638 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8p8fn"] Feb 26 14:52:13 crc kubenswrapper[4809]: I0226 14:52:13.069587 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-t4dwk"] Feb 26 14:52:13 crc kubenswrapper[4809]: I0226 14:52:13.081466 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8p8fn"] Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.036584 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b0e9-account-create-update-krhvv"] Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.051646 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b0e9-account-create-update-krhvv"] Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.280723 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798b5cff-a67c-41a5-9252-d8bda45c5f89" path="/var/lib/kubelet/pods/798b5cff-a67c-41a5-9252-d8bda45c5f89/volumes" Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.282732 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a12667f1-2d2a-426a-b085-4492d1f57c82" path="/var/lib/kubelet/pods/a12667f1-2d2a-426a-b085-4492d1f57c82/volumes" Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.284284 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3874bc2-4abf-4fb1-9149-ea5cefcf3f70" path="/var/lib/kubelet/pods/c3874bc2-4abf-4fb1-9149-ea5cefcf3f70/volumes" Feb 26 14:52:14 crc kubenswrapper[4809]: I0226 14:52:14.285819 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4e7da1-aac7-4512-9c26-e948c0fa8e29" path="/var/lib/kubelet/pods/fe4e7da1-aac7-4512-9c26-e948c0fa8e29/volumes" Feb 26 14:52:15 crc kubenswrapper[4809]: I0226 14:52:15.042685 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-936d-account-create-update-ll7mb"] Feb 26 14:52:15 crc kubenswrapper[4809]: I0226 14:52:15.060923 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-936d-account-create-update-ll7mb"] Feb 26 14:52:15 crc kubenswrapper[4809]: I0226 14:52:15.074790 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ab1d-account-create-update-pzttk"] Feb 26 14:52:15 crc kubenswrapper[4809]: I0226 14:52:15.083962 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ab1d-account-create-update-pzttk"] Feb 26 14:52:16 crc kubenswrapper[4809]: I0226 14:52:16.307773 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f72bd78-24ed-4a32-920e-1720c64a2ad3" path="/var/lib/kubelet/pods/9f72bd78-24ed-4a32-920e-1720c64a2ad3/volumes" Feb 26 14:52:16 crc kubenswrapper[4809]: I0226 14:52:16.310772 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84a8afd-6b9c-4a60-9d4a-3110f1f72045" path="/var/lib/kubelet/pods/d84a8afd-6b9c-4a60-9d4a-3110f1f72045/volumes" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.896931 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:19 crc kubenswrapper[4809]: E0226 14:52:19.898163 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52890012-1f14-4113-b279-fd2a240978da" containerName="oc" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.898188 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="52890012-1f14-4113-b279-fd2a240978da" containerName="oc" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.898621 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="52890012-1f14-4113-b279-fd2a240978da" containerName="oc" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.902098 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.912478 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.970610 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764fb\" (UniqueName: \"kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.970788 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:19 crc kubenswrapper[4809]: I0226 14:52:19.971063 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.073788 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764fb\" (UniqueName: \"kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.073869 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.073933 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.074486 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.074620 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.096535 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-764fb\" (UniqueName: \"kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb\") pod \"community-operators-zkz4g\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.234580 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:20 crc kubenswrapper[4809]: I0226 14:52:20.902248 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:21 crc kubenswrapper[4809]: I0226 14:52:21.490339 4809 generic.go:334] "Generic (PLEG): container finished" podID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerID="71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794" exitCode=0 Feb 26 14:52:21 crc kubenswrapper[4809]: I0226 14:52:21.490661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerDied","Data":"71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794"} Feb 26 14:52:21 crc kubenswrapper[4809]: I0226 14:52:21.490702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerStarted","Data":"5faf6b0d063f487e968e66101925182414684e205fe0310fde3380b24ae2c8ec"} Feb 26 14:52:22 crc kubenswrapper[4809]: I0226 14:52:22.515454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerStarted","Data":"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b"} Feb 26 14:52:24 crc kubenswrapper[4809]: I0226 14:52:24.586396 4809 generic.go:334] "Generic (PLEG): container finished" podID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerID="0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b" exitCode=0 Feb 26 14:52:24 crc kubenswrapper[4809]: I0226 14:52:24.586588 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerDied","Data":"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b"} Feb 26 14:52:25 crc kubenswrapper[4809]: I0226 14:52:25.599961 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerStarted","Data":"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561"} Feb 26 14:52:25 crc kubenswrapper[4809]: I0226 14:52:25.627754 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zkz4g" podStartSLOduration=3.129842382 podStartE2EDuration="6.627737373s" podCreationTimestamp="2026-02-26 14:52:19 +0000 UTC" firstStartedPulling="2026-02-26 14:52:21.492729743 +0000 UTC m=+2319.966050296" lastFinishedPulling="2026-02-26 14:52:24.990624754 +0000 UTC m=+2323.463945287" observedRunningTime="2026-02-26 14:52:25.623545092 +0000 UTC m=+2324.096865615" watchObservedRunningTime="2026-02-26 14:52:25.627737373 +0000 UTC m=+2324.101057896" Feb 26 14:52:30 crc kubenswrapper[4809]: I0226 14:52:30.234989 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:30 crc kubenswrapper[4809]: I0226 14:52:30.235596 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:30 crc kubenswrapper[4809]: I0226 14:52:30.303173 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:30 crc kubenswrapper[4809]: I0226 14:52:30.738723 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:30 crc kubenswrapper[4809]: I0226 14:52:30.866030 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:32 crc kubenswrapper[4809]: I0226 14:52:32.684821 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zkz4g" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="registry-server" containerID="cri-o://8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561" gracePeriod=2 Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.231474 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.318772 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-764fb\" (UniqueName: \"kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb\") pod \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.318912 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content\") pod \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.319171 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities\") pod \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\" (UID: \"3ca975d4-d34e-40a1-ac31-1c75eff86b65\") " Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.321837 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities" (OuterVolumeSpecName: "utilities") pod "3ca975d4-d34e-40a1-ac31-1c75eff86b65" (UID: "3ca975d4-d34e-40a1-ac31-1c75eff86b65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.329254 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb" (OuterVolumeSpecName: "kube-api-access-764fb") pod "3ca975d4-d34e-40a1-ac31-1c75eff86b65" (UID: "3ca975d4-d34e-40a1-ac31-1c75eff86b65"). InnerVolumeSpecName "kube-api-access-764fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.386590 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ca975d4-d34e-40a1-ac31-1c75eff86b65" (UID: "3ca975d4-d34e-40a1-ac31-1c75eff86b65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.424205 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-764fb\" (UniqueName: \"kubernetes.io/projected/3ca975d4-d34e-40a1-ac31-1c75eff86b65-kube-api-access-764fb\") on node \"crc\" DevicePath \"\"" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.424232 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.424242 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ca975d4-d34e-40a1-ac31-1c75eff86b65-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.700972 4809 generic.go:334] "Generic (PLEG): container finished" podID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerID="8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561" exitCode=0 Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.701075 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zkz4g" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.701094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerDied","Data":"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561"} Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.701166 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zkz4g" event={"ID":"3ca975d4-d34e-40a1-ac31-1c75eff86b65","Type":"ContainerDied","Data":"5faf6b0d063f487e968e66101925182414684e205fe0310fde3380b24ae2c8ec"} Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.701201 4809 scope.go:117] "RemoveContainer" containerID="8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.742153 4809 scope.go:117] "RemoveContainer" containerID="0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.744232 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.759000 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zkz4g"] Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.781524 4809 scope.go:117] "RemoveContainer" containerID="71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.855114 4809 scope.go:117] "RemoveContainer" containerID="8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561" Feb 26 14:52:33 crc kubenswrapper[4809]: E0226 14:52:33.856370 4809 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561\": container with ID starting with 8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561 not found: ID does not exist" containerID="8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.856425 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561"} err="failed to get container status \"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561\": rpc error: code = NotFound desc = could not find container \"8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561\": container with ID starting with 8d6c878b8155142d098214b908b3269d2f8f4aac6fb092a9c2053723bed83561 not found: ID does not exist" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.856470 4809 scope.go:117] "RemoveContainer" containerID="0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b" Feb 26 14:52:33 crc kubenswrapper[4809]: E0226 14:52:33.857110 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b\": container with ID starting with 0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b not found: ID does not exist" containerID="0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.857164 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b"} err="failed to get container status \"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b\": rpc error: code = NotFound desc = could not find container \"0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b\": container with ID starting with 0e87794f95f2a9cec115ed0b6941f254df6d5610b0fc8a90b381d607ccfe649b not found: ID does not exist" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.857196 4809 scope.go:117] "RemoveContainer" containerID="71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794" Feb 26 14:52:33 crc kubenswrapper[4809]: E0226 14:52:33.857448 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794\": container with ID starting with 71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794 not found: ID does not exist" containerID="71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794" Feb 26 14:52:33 crc kubenswrapper[4809]: I0226 14:52:33.857477 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794"} err="failed to get container status \"71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794\": rpc error: code = NotFound desc = could not find container \"71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794\": container with ID starting with 71ee505c11958bb4f40810db30200c6ce66b2f896f4dafb739db733fe63c0794 not found: ID does not exist" Feb 26 14:52:34 crc kubenswrapper[4809]: I0226 14:52:34.281050 4809 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" path="/var/lib/kubelet/pods/3ca975d4-d34e-40a1-ac31-1c75eff86b65/volumes" Feb 26 14:52:48 crc kubenswrapper[4809]: I0226 14:52:48.067543 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-r5jrr"] Feb 26 14:52:48 crc kubenswrapper[4809]: I0226 14:52:48.085778 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-r5jrr"] Feb 26 14:52:48 crc kubenswrapper[4809]: I0226 14:52:48.277712 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24e2cb6-58cc-407b-bc42-5d83d63a173d" path="/var/lib/kubelet/pods/b24e2cb6-58cc-407b-bc42-5d83d63a173d/volumes" Feb 26 14:52:49 crc kubenswrapper[4809]: I0226 14:52:49.058979 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-e6d8-account-create-update-xhq4l"] Feb 26 14:52:49 crc kubenswrapper[4809]: I0226 14:52:49.084430 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-e6d8-account-create-update-xhq4l"] Feb 26 14:52:50 crc kubenswrapper[4809]: I0226 14:52:50.271160 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7180709f-48cb-4863-95a6-61637c4508f8" path="/var/lib/kubelet/pods/7180709f-48cb-4863-95a6-61637c4508f8/volumes" Feb 26 14:52:51 crc kubenswrapper[4809]: I0226 14:52:51.032432 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jhstk"] Feb 26 14:52:51 crc kubenswrapper[4809]: I0226 14:52:51.047004 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jhstk"] Feb 26 14:52:52 crc kubenswrapper[4809]: I0226 14:52:52.276618 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4403ebd6-aa8d-4398-842e-f33ef09117cc" path="/var/lib/kubelet/pods/4403ebd6-aa8d-4398-842e-f33ef09117cc/volumes" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.056575 4809 scope.go:117] "RemoveContainer" containerID="13ee9164943c9856ea27b3ed3933e872246a20c9bfc8761744abb03a4b6fc089" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.100054 4809 scope.go:117] "RemoveContainer" containerID="f5650b06635b54b5e8be96da160f3bc46a3cdfc55cd0352966168cff4ba1c6d6" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.152354 4809 scope.go:117] "RemoveContainer" containerID="94f9b3222c66a5756d98428fc37dda8f3bfa83d31305e0867ff7dc7d3b48cb4f" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.209235 4809 scope.go:117] "RemoveContainer" containerID="496d06f2cb6a3d28ee0f975e72c008a207e63b6e165041b5498139f4d7c9ff8b" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.260717 4809 scope.go:117] "RemoveContainer" containerID="ef32fd0e816063c79286192f6bf6c6a22a5ac6e0afd1cdac59d27cdd89ea584a" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.326860 4809 scope.go:117] "RemoveContainer" containerID="b237b2a8df805508907caa6e357c4762fb3ff39dbf14a3f8eb3d3a1015aaa4b1" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.366294 4809 scope.go:117] "RemoveContainer" containerID="7fc2f67e0fb0dd5ccb07faa008722bed1e4077362207a9fe5d8b9366e09e024c" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.395900 4809 scope.go:117] "RemoveContainer" containerID="f35c4d90cfbb95f65e6ed2ad56b1628e96016d07869ce0dc6e2b9e1cae36d587" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 14:53:10.419756 4809 scope.go:117] "RemoveContainer" containerID="54700816eda36c542c9266977891ddb0d97193bd82d2a3ee9808db703cf4048d" Feb 26 14:53:10 crc kubenswrapper[4809]: I0226 
14:53:10.467481 4809 scope.go:117] "RemoveContainer" containerID="c7d2b047a828a1773ee1adab26998056fdc37a9e1c69e6c8ad6dde24ffa29ba1" Feb 26 14:53:20 crc kubenswrapper[4809]: I0226 14:53:20.070985 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2vl8t"] Feb 26 14:53:20 crc kubenswrapper[4809]: I0226 14:53:20.080288 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2vl8t"] Feb 26 14:53:20 crc kubenswrapper[4809]: I0226 14:53:20.270448 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538bc3b6-9ed2-48da-8596-55ca0077a9df" path="/var/lib/kubelet/pods/538bc3b6-9ed2-48da-8596-55ca0077a9df/volumes" Feb 26 14:53:23 crc kubenswrapper[4809]: I0226 14:53:23.046114 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7nvf"] Feb 26 14:53:23 crc kubenswrapper[4809]: I0226 14:53:23.060249 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7nvf"] Feb 26 14:53:24 crc kubenswrapper[4809]: I0226 14:53:24.281500 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b753f14-9d84-40a0-963f-233a8d25d27f" path="/var/lib/kubelet/pods/3b753f14-9d84-40a0-963f-233a8d25d27f/volumes" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.590578 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njhx7"] Feb 26 14:53:26 crc kubenswrapper[4809]: E0226 14:53:26.591691 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="extract-content" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.591709 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="extract-content" Feb 26 14:53:26 crc kubenswrapper[4809]: E0226 14:53:26.591765 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="extract-utilities" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.591776 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="extract-utilities" Feb 26 14:53:26 crc kubenswrapper[4809]: E0226 14:53:26.591796 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="registry-server" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.591804 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="registry-server" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.592082 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca975d4-d34e-40a1-ac31-1c75eff86b65" containerName="registry-server" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.594205 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.618079 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njhx7"] Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.777921 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-catalog-content\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.778372 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdgm\" (UniqueName: \"kubernetes.io/projected/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-kube-api-access-njdgm\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.778603 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-utilities\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.880825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdgm\" (UniqueName: \"kubernetes.io/projected/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-kube-api-access-njdgm\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.880906 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-utilities\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.881070 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-catalog-content\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.881703 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-utilities\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.881730 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-catalog-content\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.902590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-njdgm\" (UniqueName: \"kubernetes.io/projected/f52e8302-5dc1-4b5d-b571-29bd5e69f6a6-kube-api-access-njdgm\") pod \"redhat-operators-njhx7\" (UID: \"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6\") " pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:26 crc kubenswrapper[4809]: I0226 14:53:26.933616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njhx7" Feb 26 14:53:27 crc kubenswrapper[4809]: I0226 14:53:27.418157 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njhx7"] Feb 26 14:53:28 crc kubenswrapper[4809]: I0226 14:53:28.439087 4809 generic.go:334] "Generic (PLEG): container finished" podID="f52e8302-5dc1-4b5d-b571-29bd5e69f6a6" containerID="4ea7a66091d785106dea2f4e8774a8fc0bff414c7a67810ea29f1be0b10126ee" exitCode=0 Feb 26 14:53:28 crc kubenswrapper[4809]: I0226 14:53:28.439152 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njhx7" event={"ID":"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6","Type":"ContainerDied","Data":"4ea7a66091d785106dea2f4e8774a8fc0bff414c7a67810ea29f1be0b10126ee"} Feb 26 14:53:28 crc kubenswrapper[4809]: I0226 14:53:28.439733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njhx7" event={"ID":"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6","Type":"ContainerStarted","Data":"303167ef5456125663cbc39ad5091bc1aff60143fdd4468dc567d53eb376f2b4"} Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.159629 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pt92x"] Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.164184 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.180505 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pt92x"] Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.283373 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lww\" (UniqueName: \"kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.283739 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.283843 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.386407 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.386474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.386645 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lww\" (UniqueName: \"kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.386959 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.387365 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.410320 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w9lww\" (UniqueName: \"kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww\") pod \"certified-operators-pt92x\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") " pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:34 crc kubenswrapper[4809]: I0226 14:53:34.500090 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pt92x" Feb 26 14:53:38 crc kubenswrapper[4809]: I0226 14:53:38.084097 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pt92x"] Feb 26 14:53:38 crc kubenswrapper[4809]: I0226 14:53:38.557936 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerStarted","Data":"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"} Feb 26 14:53:38 crc kubenswrapper[4809]: I0226 14:53:38.557981 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerStarted","Data":"eaeaeaece3466f7173135bd3632b662d7ccf2f6e427ef5462d4cc88f52739c8d"} Feb 26 14:53:38 crc kubenswrapper[4809]: I0226 14:53:38.560417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njhx7" event={"ID":"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6","Type":"ContainerStarted","Data":"f23bca681e04447832ea10c990ab812dd32eb7a73ffc1497895e28d501c5c743"} Feb 26 14:53:40 crc kubenswrapper[4809]: I0226 14:53:40.601341 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerID="96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6" exitCode=0 Feb 26 14:53:40 crc kubenswrapper[4809]: I0226 14:53:40.601431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerDied","Data":"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"} Feb 26 14:53:41 crc kubenswrapper[4809]: I0226 14:53:41.616338 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerStarted","Data":"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"} Feb 26 14:53:41 crc kubenswrapper[4809]: I0226 14:53:41.619464 4809 generic.go:334] "Generic (PLEG): container finished" podID="f52e8302-5dc1-4b5d-b571-29bd5e69f6a6" containerID="f23bca681e04447832ea10c990ab812dd32eb7a73ffc1497895e28d501c5c743" exitCode=0 Feb 26 14:53:41 crc kubenswrapper[4809]: I0226 14:53:41.620346 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njhx7" event={"ID":"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6","Type":"ContainerDied","Data":"f23bca681e04447832ea10c990ab812dd32eb7a73ffc1497895e28d501c5c743"} Feb 26 14:53:44 crc kubenswrapper[4809]: I0226 14:53:44.650758 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerID="8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522" exitCode=0 Feb 26 14:53:44 crc kubenswrapper[4809]: I0226 14:53:44.650814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" 
event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerDied","Data":"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"}
Feb 26 14:53:44 crc kubenswrapper[4809]: I0226 14:53:44.654517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njhx7" event={"ID":"f52e8302-5dc1-4b5d-b571-29bd5e69f6a6","Type":"ContainerStarted","Data":"5b67665a07d77121bde1139e20011c8e4f0ac05cd7fbea7db91070b539698434"}
Feb 26 14:53:44 crc kubenswrapper[4809]: I0226 14:53:44.698966 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njhx7" podStartSLOduration=3.712303735 podStartE2EDuration="18.698947257s" podCreationTimestamp="2026-02-26 14:53:26 +0000 UTC" firstStartedPulling="2026-02-26 14:53:28.44139756 +0000 UTC m=+2386.914718123" lastFinishedPulling="2026-02-26 14:53:43.428041122 +0000 UTC m=+2401.901361645" observedRunningTime="2026-02-26 14:53:44.690040072 +0000 UTC m=+2403.163360595" watchObservedRunningTime="2026-02-26 14:53:44.698947257 +0000 UTC m=+2403.172267780"
Feb 26 14:53:45 crc kubenswrapper[4809]: I0226 14:53:45.668099 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerStarted","Data":"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"}
Feb 26 14:53:45 crc kubenswrapper[4809]: I0226 14:53:45.700650 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pt92x" podStartSLOduration=7.260551796 podStartE2EDuration="11.700630775s" podCreationTimestamp="2026-02-26 14:53:34 +0000 UTC" firstStartedPulling="2026-02-26 14:53:40.604083796 +0000 UTC m=+2399.077404329" lastFinishedPulling="2026-02-26 14:53:45.044162775 +0000 UTC m=+2403.517483308" observedRunningTime="2026-02-26 14:53:45.689157426 +0000 UTC m=+2404.162477969" watchObservedRunningTime="2026-02-26 14:53:45.700630775 +0000 UTC m=+2404.173951298"
Feb 26 14:53:46 crc kubenswrapper[4809]: I0226 14:53:46.933922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-njhx7"
Feb 26 14:53:46 crc kubenswrapper[4809]: I0226 14:53:46.934180 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njhx7"
Feb 26 14:53:47 crc kubenswrapper[4809]: I0226 14:53:47.991152 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-njhx7" podUID="f52e8302-5dc1-4b5d-b571-29bd5e69f6a6" containerName="registry-server" probeResult="failure" output=<
Feb 26 14:53:47 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 14:53:47 crc kubenswrapper[4809]: >
Feb 26 14:53:54 crc kubenswrapper[4809]: I0226 14:53:54.500338 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:54 crc kubenswrapper[4809]: I0226 14:53:54.501478 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:54 crc kubenswrapper[4809]: I0226 14:53:54.570192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:54 crc kubenswrapper[4809]: I0226 14:53:54.857799 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:54 crc kubenswrapper[4809]: I0226 14:53:54.946797 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pt92x"]
Feb 26 14:53:56 crc kubenswrapper[4809]: I0226 14:53:56.798066 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pt92x" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="registry-server" containerID="cri-o://5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8" gracePeriod=2
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.014592 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-njhx7"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.085824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njhx7"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.237928 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njhx7"]
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.362254 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"]
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.362762 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nw6lt" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="registry-server" containerID="cri-o://d6769f62d559f40396f585b9baf75e217820395d211a18697f6a90f4e7a80a47" gracePeriod=2
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.657374 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.801764 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lww\" (UniqueName: \"kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww\") pod \"c4464776-3503-46ca-8bfb-0c963f5db40c\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") "
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.801815 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content\") pod \"c4464776-3503-46ca-8bfb-0c963f5db40c\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") "
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.801855 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities\") pod \"c4464776-3503-46ca-8bfb-0c963f5db40c\" (UID: \"c4464776-3503-46ca-8bfb-0c963f5db40c\") "
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.803228 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities" (OuterVolumeSpecName: "utilities") pod "c4464776-3503-46ca-8bfb-0c963f5db40c" (UID: "c4464776-3503-46ca-8bfb-0c963f5db40c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.813497 4809 generic.go:334] "Generic (PLEG): container finished" podID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerID="d6769f62d559f40396f585b9baf75e217820395d211a18697f6a90f4e7a80a47" exitCode=0
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.813551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerDied","Data":"d6769f62d559f40396f585b9baf75e217820395d211a18697f6a90f4e7a80a47"}
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.817554 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww" (OuterVolumeSpecName: "kube-api-access-w9lww") pod "c4464776-3503-46ca-8bfb-0c963f5db40c" (UID: "c4464776-3503-46ca-8bfb-0c963f5db40c"). InnerVolumeSpecName "kube-api-access-w9lww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.827630 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerID="5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8" exitCode=0
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.827709 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerDied","Data":"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"}
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.827792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt92x" event={"ID":"c4464776-3503-46ca-8bfb-0c963f5db40c","Type":"ContainerDied","Data":"eaeaeaece3466f7173135bd3632b662d7ccf2f6e427ef5462d4cc88f52739c8d"}
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.827823 4809 scope.go:117] "RemoveContainer" containerID="5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.828239 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pt92x"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.872052 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4464776-3503-46ca-8bfb-0c963f5db40c" (UID: "c4464776-3503-46ca-8bfb-0c963f5db40c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.898301 4809 scope.go:117] "RemoveContainer" containerID="8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.904572 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lww\" (UniqueName: \"kubernetes.io/projected/c4464776-3503-46ca-8bfb-0c963f5db40c-kube-api-access-w9lww\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.904605 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.904617 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4464776-3503-46ca-8bfb-0c963f5db40c-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.922561 4809 scope.go:117] "RemoveContainer" containerID="96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.975842 4809 scope.go:117] "RemoveContainer" containerID="5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"
Feb 26 14:53:57 crc kubenswrapper[4809]: E0226 14:53:57.976321 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8\": container with ID starting with 5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8 not found: ID does not exist" containerID="5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.976368 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8"} err="failed to get container status \"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8\": rpc error: code = NotFound desc = could not find container \"5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8\": container with ID starting with 5a0b561a6b69e4aa7e30bdacd5910eeb63e195b05dacf9869753deacff853bd8 not found: ID does not exist"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.976397 4809 scope.go:117] "RemoveContainer" containerID="8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"
Feb 26 14:53:57 crc kubenswrapper[4809]: E0226 14:53:57.976998 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522\": container with ID starting with 8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522 not found: ID does not exist" containerID="8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.977038 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522"} err="failed to get container status \"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522\": rpc error: code = NotFound desc = could not find container \"8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522\": container with ID starting with 8fc0a97bd8b703c78d7bb9ea7967121451ed43ac25755a44ed1d84d569c92522 not found: ID does not exist"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.977053 4809 scope.go:117] "RemoveContainer" containerID="96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"
Feb 26 14:53:57 crc kubenswrapper[4809]: E0226 14:53:57.977365 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6\": container with ID starting with 96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6 not found: ID does not exist" containerID="96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"
Feb 26 14:53:57 crc kubenswrapper[4809]: I0226 14:53:57.977409 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6"} err="failed to get container status \"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6\": rpc error: code = NotFound desc = could not find container \"96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6\": container with ID starting with 96f3e7848770395933c310f26a321e7a95a2e2329744bf33afb447106e53e3f6 not found: ID does not exist"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.167445 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pt92x"]
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.176598 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pt92x"]
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.308254 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" path="/var/lib/kubelet/pods/c4464776-3503-46ca-8bfb-0c963f5db40c/volumes"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.428986 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6lt"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.518443 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content\") pod \"3611b884-e396-4776-9a3b-7fb279d90bb9\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") "
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.518750 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkwg\" (UniqueName: \"kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg\") pod \"3611b884-e396-4776-9a3b-7fb279d90bb9\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") "
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.518884 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities\") pod \"3611b884-e396-4776-9a3b-7fb279d90bb9\" (UID: \"3611b884-e396-4776-9a3b-7fb279d90bb9\") "
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.520851 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities" (OuterVolumeSpecName: "utilities") pod "3611b884-e396-4776-9a3b-7fb279d90bb9" (UID: "3611b884-e396-4776-9a3b-7fb279d90bb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.525225 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg" (OuterVolumeSpecName: "kube-api-access-mrkwg") pod "3611b884-e396-4776-9a3b-7fb279d90bb9" (UID: "3611b884-e396-4776-9a3b-7fb279d90bb9"). InnerVolumeSpecName "kube-api-access-mrkwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.621653 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.621686 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkwg\" (UniqueName: \"kubernetes.io/projected/3611b884-e396-4776-9a3b-7fb279d90bb9-kube-api-access-mrkwg\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.677983 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3611b884-e396-4776-9a3b-7fb279d90bb9" (UID: "3611b884-e396-4776-9a3b-7fb279d90bb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.724069 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3611b884-e396-4776-9a3b-7fb279d90bb9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.842248 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6lt" event={"ID":"3611b884-e396-4776-9a3b-7fb279d90bb9","Type":"ContainerDied","Data":"c5150342f61cb5850f645188b7f222e79945e222250a6a47d1cbc5664c6e781e"}
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.842277 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6lt"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.842300 4809 scope.go:117] "RemoveContainer" containerID="d6769f62d559f40396f585b9baf75e217820395d211a18697f6a90f4e7a80a47"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.867884 4809 scope.go:117] "RemoveContainer" containerID="fdeea7b545c15b190a96b05ef8f08b392b672c80169a35e7a730dc79f3c9836e"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.885313 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"]
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.900225 4809 scope.go:117] "RemoveContainer" containerID="d40e6cfd04deca30253bcb5a46e18d9582d2986c9d855d26a11b27770dcb59da"
Feb 26 14:53:58 crc kubenswrapper[4809]: I0226 14:53:58.901335 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nw6lt"]
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143144 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535294-dn9hc"]
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143827 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="extract-content"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143840 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="extract-content"
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143867 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143873 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143884 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143891 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143899 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="extract-content"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143904 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="extract-content"
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143921 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="extract-utilities"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143926 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="extract-utilities"
Feb 26 14:54:00 crc kubenswrapper[4809]: E0226 14:54:00.143944 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="extract-utilities"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.143949 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="extract-utilities"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.144175 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.144199 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4464776-3503-46ca-8bfb-0c963f5db40c" containerName="registry-server"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.144988 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.147146 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.147896 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.148935 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.166674 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-dn9hc"]
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.259198 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fds25\" (UniqueName: \"kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25\") pod \"auto-csr-approver-29535294-dn9hc\" (UID: \"44e9f26d-2936-433c-b595-3762d5fdb1cb\") " pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.270784 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3611b884-e396-4776-9a3b-7fb279d90bb9" path="/var/lib/kubelet/pods/3611b884-e396-4776-9a3b-7fb279d90bb9/volumes"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.361571 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fds25\" (UniqueName: \"kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25\") pod \"auto-csr-approver-29535294-dn9hc\" (UID: \"44e9f26d-2936-433c-b595-3762d5fdb1cb\") " pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.387441 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fds25\" (UniqueName: \"kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25\") pod \"auto-csr-approver-29535294-dn9hc\" (UID: \"44e9f26d-2936-433c-b595-3762d5fdb1cb\") " pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.500822 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:00 crc kubenswrapper[4809]: I0226 14:54:00.997914 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-dn9hc"]
Feb 26 14:54:01 crc kubenswrapper[4809]: I0226 14:54:01.885428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-dn9hc" event={"ID":"44e9f26d-2936-433c-b595-3762d5fdb1cb","Type":"ContainerStarted","Data":"e49bf2eae01c2b3a22d33337a5b21d53240436b96817cdd2aa64ce3dcac99a72"}
Feb 26 14:54:02 crc kubenswrapper[4809]: I0226 14:54:02.899819 4809 generic.go:334] "Generic (PLEG): container finished" podID="44e9f26d-2936-433c-b595-3762d5fdb1cb" containerID="a54faa2e4c8eb0b43c79ebf00cd21b4daa1d50335fa793406162e3ea7b00f3bf" exitCode=0
Feb 26 14:54:02 crc kubenswrapper[4809]: I0226 14:54:02.899883 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-dn9hc" event={"ID":"44e9f26d-2936-433c-b595-3762d5fdb1cb","Type":"ContainerDied","Data":"a54faa2e4c8eb0b43c79ebf00cd21b4daa1d50335fa793406162e3ea7b00f3bf"}
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.056890 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-cqrd5"]
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.071906 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-cqrd5"]
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.269209 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4145881d-ecb4-4082-9d47-09915db05fb6" path="/var/lib/kubelet/pods/4145881d-ecb4-4082-9d47-09915db05fb6/volumes"
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.371284 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.456461 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fds25\" (UniqueName: \"kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25\") pod \"44e9f26d-2936-433c-b595-3762d5fdb1cb\" (UID: \"44e9f26d-2936-433c-b595-3762d5fdb1cb\") "
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.462208 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25" (OuterVolumeSpecName: "kube-api-access-fds25") pod "44e9f26d-2936-433c-b595-3762d5fdb1cb" (UID: "44e9f26d-2936-433c-b595-3762d5fdb1cb"). InnerVolumeSpecName "kube-api-access-fds25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.559383 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fds25\" (UniqueName: \"kubernetes.io/projected/44e9f26d-2936-433c-b595-3762d5fdb1cb-kube-api-access-fds25\") on node \"crc\" DevicePath \"\""
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.922407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535294-dn9hc" event={"ID":"44e9f26d-2936-433c-b595-3762d5fdb1cb","Type":"ContainerDied","Data":"e49bf2eae01c2b3a22d33337a5b21d53240436b96817cdd2aa64ce3dcac99a72"}
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.922455 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e49bf2eae01c2b3a22d33337a5b21d53240436b96817cdd2aa64ce3dcac99a72"
Feb 26 14:54:04 crc kubenswrapper[4809]: I0226 14:54:04.922469 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535294-dn9hc"
Feb 26 14:54:05 crc kubenswrapper[4809]: I0226 14:54:05.441553 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-842tt"]
Feb 26 14:54:05 crc kubenswrapper[4809]: I0226 14:54:05.456674 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535288-842tt"]
Feb 26 14:54:06 crc kubenswrapper[4809]: I0226 14:54:06.296179 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0ebdaa-3d31-4715-bb02-241b564ad69c" path="/var/lib/kubelet/pods/0d0ebdaa-3d31-4715-bb02-241b564ad69c/volumes"
Feb 26 14:54:10 crc kubenswrapper[4809]: I0226 14:54:10.731836 4809 scope.go:117] "RemoveContainer" containerID="197f34b03c1f5fa85c062535aeb7f5f41da4c5852984d61b88c0171a04078e86"
Feb 26 14:54:10 crc kubenswrapper[4809]: I0226 14:54:10.791666 4809 scope.go:117] "RemoveContainer" containerID="c84b1b20373054ecd2e5a080b4188a6fdd14c7eda0fc36a5ee78774475e05e62"
Feb 26 14:54:10 crc kubenswrapper[4809]: I0226 14:54:10.907221 4809 scope.go:117] "RemoveContainer" containerID="249b416ff52e64602436ff0aec2ff70da8ee80718fe6eda9ef8905c72730ab94"
Feb 26 14:54:10 crc kubenswrapper[4809]: I0226 14:54:10.953617 4809 scope.go:117] "RemoveContainer" containerID="ad1d24417760e5b034492fd0f48b89f452925ca8ea477b5cf90f4a60013fa046"
Feb 26 14:54:11 crc kubenswrapper[4809]: I0226 14:54:11.794298 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:54:11 crc kubenswrapper[4809]: I0226 14:54:11.794405 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:54:28 crc kubenswrapper[4809]: I0226 14:54:28.243664 4809 generic.go:334] "Generic (PLEG): container finished" podID="f33dc9c7-e973-434a-96c2-6712074b3ef8" containerID="39b578f5978adcb5b88f90a52fc62a65eaeb042f3e07d49c534f997f363bdf86" exitCode=0
Feb 26 14:54:28 crc kubenswrapper[4809]: I0226 14:54:28.243748 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" event={"ID":"f33dc9c7-e973-434a-96c2-6712074b3ef8","Type":"ContainerDied","Data":"39b578f5978adcb5b88f90a52fc62a65eaeb042f3e07d49c534f997f363bdf86"}
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.126509 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.229225 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddr75\" (UniqueName: \"kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75\") pod \"f33dc9c7-e973-434a-96c2-6712074b3ef8\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") "
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.229611 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam\") pod \"f33dc9c7-e973-434a-96c2-6712074b3ef8\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") "
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.229692 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory\") pod \"f33dc9c7-e973-434a-96c2-6712074b3ef8\" (UID: \"f33dc9c7-e973-434a-96c2-6712074b3ef8\") "
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.235097 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75" (OuterVolumeSpecName: "kube-api-access-ddr75") pod "f33dc9c7-e973-434a-96c2-6712074b3ef8" (UID: "f33dc9c7-e973-434a-96c2-6712074b3ef8"). InnerVolumeSpecName "kube-api-access-ddr75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.276843 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.284913 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory" (OuterVolumeSpecName: "inventory") pod "f33dc9c7-e973-434a-96c2-6712074b3ef8" (UID: "f33dc9c7-e973-434a-96c2-6712074b3ef8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.289586 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f33dc9c7-e973-434a-96c2-6712074b3ef8" (UID: "f33dc9c7-e973-434a-96c2-6712074b3ef8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.335404 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.335704 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33dc9c7-e973-434a-96c2-6712074b3ef8-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.335715 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddr75\" (UniqueName: \"kubernetes.io/projected/f33dc9c7-e973-434a-96c2-6712074b3ef8-kube-api-access-ddr75\") on node \"crc\" DevicePath \"\""
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.398547 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm" event={"ID":"f33dc9c7-e973-434a-96c2-6712074b3ef8","Type":"ContainerDied","Data":"cd4ac4e4cd2001cef706d26bf0392115ee3c4cb93f48c7d5fcf9360fdeceec75"}
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.398593 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4ac4e4cd2001cef706d26bf0392115ee3c4cb93f48c7d5fcf9360fdeceec75"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.398611 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"]
Feb 26 14:54:30 crc kubenswrapper[4809]: E0226 14:54:30.399230 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33dc9c7-e973-434a-96c2-6712074b3ef8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.399252 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33dc9c7-e973-434a-96c2-6712074b3ef8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:54:30 crc kubenswrapper[4809]: E0226 14:54:30.399267 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e9f26d-2936-433c-b595-3762d5fdb1cb" containerName="oc"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.399276 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e9f26d-2936-433c-b595-3762d5fdb1cb" containerName="oc"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.399544 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e9f26d-2936-433c-b595-3762d5fdb1cb" containerName="oc"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.399570 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33dc9c7-e973-434a-96c2-6712074b3ef8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.400549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.404173 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"]
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.437751 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfm4\" (UniqueName: \"kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.437867 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.437971 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.538781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.538906 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfm4\" (UniqueName: \"kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.538990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.543734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.544881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.566405 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfm4\" (UniqueName: \"kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:30 crc kubenswrapper[4809]: I0226 14:54:30.731516 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:54:31 crc kubenswrapper[4809]: I0226 14:54:31.409759 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"]
Feb 26 14:54:31 crc kubenswrapper[4809]: W0226 14:54:31.412158 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod351b60bc_8ad8_4ac3_89bd_27877aeb981e.slice/crio-fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619 WatchSource:0}: Error finding container fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619: Status 404 returned error can't find the container with id fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619
Feb 26 14:54:32 crc kubenswrapper[4809]: I0226 14:54:32.308693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9" event={"ID":"351b60bc-8ad8-4ac3-89bd-27877aeb981e","Type":"ContainerStarted","Data":"af437030c7501d762702b4be2c61d56e9797ad8abb0e5e44f70b33708cb9b0f9"}
Feb 26 14:54:32 crc kubenswrapper[4809]: I0226 14:54:32.309403 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9" event={"ID":"351b60bc-8ad8-4ac3-89bd-27877aeb981e","Type":"ContainerStarted","Data":"fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619"}
Feb 26 14:54:32 crc kubenswrapper[4809]: I0226 14:54:32.345073 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9" podStartSLOduration=1.9210945590000001 podStartE2EDuration="2.345046876s" podCreationTimestamp="2026-02-26 14:54:30 +0000 UTC" firstStartedPulling="2026-02-26 14:54:31.418132254 +0000 UTC m=+2449.891452777" lastFinishedPulling="2026-02-26 14:54:31.842084561 +0000 UTC m=+2450.315405094" observedRunningTime="2026-02-26 14:54:32.33196699 +0000 UTC m=+2450.805287553" watchObservedRunningTime="2026-02-26 14:54:32.345046876 +0000 UTC m=+2450.818367419"
Feb 26 14:54:41 crc kubenswrapper[4809]: I0226 14:54:41.794556 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:54:41 crc kubenswrapper[4809]: I0226 14:54:41.795166 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:55:11 crc kubenswrapper[4809]: I0226 14:55:11.793821 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 14:55:11 crc kubenswrapper[4809]: I0226 14:55:11.794774 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 14:55:11 crc kubenswrapper[4809]: I0226 14:55:11.794863 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh"
Feb 26 14:55:11 crc kubenswrapper[4809]: I0226 14:55:11.797965 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 14:55:11 crc kubenswrapper[4809]: I0226 14:55:11.798819 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" gracePeriod=600
Feb 26 14:55:11 crc kubenswrapper[4809]: E0226 14:55:11.921681 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:55:12 crc kubenswrapper[4809]: I0226 14:55:12.902404 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" exitCode=0
Feb 26 14:55:12 crc kubenswrapper[4809]: I0226 14:55:12.902516 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"}
Feb 26 14:55:12 crc kubenswrapper[4809]: I0226 14:55:12.902765 4809 scope.go:117] "RemoveContainer" containerID="9b76648b2cbcf48bce5cd05e9f53422a1444b792201f4471d7d72fd10f1767d3"
Feb 26 14:55:12 crc kubenswrapper[4809]: I0226 14:55:12.904232 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:55:12 crc kubenswrapper[4809]: E0226 14:55:12.905299 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:55:26 crc kubenswrapper[4809]: I0226 14:55:26.257137 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:55:26 crc kubenswrapper[4809]: E0226 14:55:26.257978 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:55:40 crc kubenswrapper[4809]: I0226 14:55:40.257914 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:55:40 crc kubenswrapper[4809]: E0226 14:55:40.258678 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:55:55 crc kubenswrapper[4809]: I0226 14:55:55.256717 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:55:55 crc kubenswrapper[4809]: E0226 14:55:55.257616 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:55:58 crc kubenswrapper[4809]: I0226 14:55:58.508176 4809 generic.go:334] "Generic (PLEG): container finished" podID="351b60bc-8ad8-4ac3-89bd-27877aeb981e" containerID="af437030c7501d762702b4be2c61d56e9797ad8abb0e5e44f70b33708cb9b0f9" exitCode=0
Feb 26 14:55:58 crc kubenswrapper[4809]: I0226 14:55:58.508238 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9" event={"ID":"351b60bc-8ad8-4ac3-89bd-27877aeb981e","Type":"ContainerDied","Data":"af437030c7501d762702b4be2c61d56e9797ad8abb0e5e44f70b33708cb9b0f9"}
Feb 26 14:55:59 crc kubenswrapper[4809]: I0226 14:55:59.994869 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.153255 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535296-mvh7b"]
Feb 26 14:56:00 crc kubenswrapper[4809]: E0226 14:56:00.154166 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351b60bc-8ad8-4ac3-89bd-27877aeb981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.154188 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="351b60bc-8ad8-4ac3-89bd-27877aeb981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.154498 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="351b60bc-8ad8-4ac3-89bd-27877aeb981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.155556 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-mvh7b"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.157907 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.158091 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.158927 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.165154 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-mvh7b"]
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.170150 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfm4\" (UniqueName: \"kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4\") pod \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") "
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.170388 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory\") pod \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") "
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.170587 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam\") pod \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\" (UID: \"351b60bc-8ad8-4ac3-89bd-27877aeb981e\") "
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.177797 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4" (OuterVolumeSpecName: "kube-api-access-ccfm4") pod "351b60bc-8ad8-4ac3-89bd-27877aeb981e" (UID: "351b60bc-8ad8-4ac3-89bd-27877aeb981e"). InnerVolumeSpecName "kube-api-access-ccfm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.207615 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "351b60bc-8ad8-4ac3-89bd-27877aeb981e" (UID: "351b60bc-8ad8-4ac3-89bd-27877aeb981e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.217357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory" (OuterVolumeSpecName: "inventory") pod "351b60bc-8ad8-4ac3-89bd-27877aeb981e" (UID: "351b60bc-8ad8-4ac3-89bd-27877aeb981e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.273939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gk2m\" (UniqueName: \"kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m\") pod \"auto-csr-approver-29535296-mvh7b\" (UID: \"29df1fbc-1739-4d79-a692-de2ca9570d28\") " pod="openshift-infra/auto-csr-approver-29535296-mvh7b"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.274141 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.274161 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/351b60bc-8ad8-4ac3-89bd-27877aeb981e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.274171 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfm4\" (UniqueName: \"kubernetes.io/projected/351b60bc-8ad8-4ac3-89bd-27877aeb981e-kube-api-access-ccfm4\") on node \"crc\" DevicePath \"\""
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.376853 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gk2m\" (UniqueName: \"kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m\") pod \"auto-csr-approver-29535296-mvh7b\" (UID: \"29df1fbc-1739-4d79-a692-de2ca9570d28\") " pod="openshift-infra/auto-csr-approver-29535296-mvh7b"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.412915 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gk2m\" (UniqueName: \"kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m\") pod \"auto-csr-approver-29535296-mvh7b\" (UID: \"29df1fbc-1739-4d79-a692-de2ca9570d28\") " pod="openshift-infra/auto-csr-approver-29535296-mvh7b"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.479988 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-mvh7b"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.539426 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9" event={"ID":"351b60bc-8ad8-4ac3-89bd-27877aeb981e","Type":"ContainerDied","Data":"fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619"}
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.539474 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7d35f0583cead91ae5466b666fb8753a84c1d1786b985ffb18e160ff5e9619"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.539530 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.635858 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"]
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.637937 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.640747 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.641070 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.641270 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.641439 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.676592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"]
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.698692 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.698915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.698941 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457cm\" (UniqueName: \"kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.800264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.800304 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-457cm\" (UniqueName: \"kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.800412 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.806888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.808491 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.816253 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-457cm\" (UniqueName: \"kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.971263 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"
Feb 26 14:56:00 crc kubenswrapper[4809]: I0226 14:56:00.997906 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-mvh7b"]
Feb 26 14:56:01 crc kubenswrapper[4809]: I0226 14:56:01.020957 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 14:56:01 crc kubenswrapper[4809]: I0226 14:56:01.551208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" event={"ID":"29df1fbc-1739-4d79-a692-de2ca9570d28","Type":"ContainerStarted","Data":"a8b69da24e93d6791b76a82f3fca333023525339f466ae37c109115357a19e72"}
Feb 26 14:56:01 crc kubenswrapper[4809]: W0226 14:56:01.637118 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fb723f4_b0eb_4520_a602_c723a935d0c6.slice/crio-414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b WatchSource:0}: Error finding container 414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b: Status 404 returned error can't find the container with id 414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b
Feb 26 14:56:01 crc kubenswrapper[4809]: I0226 14:56:01.639704 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl"]
Feb 26 14:56:02 crc kubenswrapper[4809]: I0226 14:56:02.591134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" event={"ID":"9fb723f4-b0eb-4520-a602-c723a935d0c6","Type":"ContainerStarted","Data":"bdf4b1bcffe65567ed3fd1c8a2356e0aa5ee1d0b2697ba277038978e928e6951"}
Feb 26 14:56:02 crc kubenswrapper[4809]: I0226 14:56:02.592805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" event={"ID":"9fb723f4-b0eb-4520-a602-c723a935d0c6","Type":"ContainerStarted","Data":"414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b"}
Feb 26 14:56:02 crc kubenswrapper[4809]: I0226 14:56:02.613403 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" podStartSLOduration=2.187877045 podStartE2EDuration="2.613383207s" podCreationTimestamp="2026-02-26 14:56:00 +0000 UTC" firstStartedPulling="2026-02-26 14:56:01.639499137 +0000 UTC m=+2540.112819660" lastFinishedPulling="2026-02-26 14:56:02.065005279 +0000 UTC m=+2540.538325822" observedRunningTime="2026-02-26 14:56:02.605747978 +0000 UTC m=+2541.079068531" watchObservedRunningTime="2026-02-26 14:56:02.613383207 +0000 UTC m=+2541.086703730"
Feb 26 14:56:04 crc kubenswrapper[4809]: I0226 14:56:04.070425 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-mnms7"]
Feb 26 14:56:04 crc kubenswrapper[4809]: I0226 14:56:04.083132 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-mnms7"]
Feb 26 14:56:04 crc kubenswrapper[4809]: I0226 14:56:04.288734 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd" path="/var/lib/kubelet/pods/b5d7ec09-9b8d-45fb-9a52-5dec35ef7ddd/volumes"
Feb 26 14:56:06 crc kubenswrapper[4809]: I0226 14:56:06.257921 4809 scope.go:117] "RemoveContainer"
containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:56:06 crc kubenswrapper[4809]: E0226 14:56:06.258768 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:56:07 crc kubenswrapper[4809]: I0226 14:56:07.650334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" event={"ID":"29df1fbc-1739-4d79-a692-de2ca9570d28","Type":"ContainerStarted","Data":"e1034ee5383864858fa9cbd9672825d3108fabe8f72d30d63718e78d4d464096"} Feb 26 14:56:07 crc kubenswrapper[4809]: I0226 14:56:07.654077 4809 generic.go:334] "Generic (PLEG): container finished" podID="9fb723f4-b0eb-4520-a602-c723a935d0c6" containerID="bdf4b1bcffe65567ed3fd1c8a2356e0aa5ee1d0b2697ba277038978e928e6951" exitCode=0 Feb 26 14:56:07 crc kubenswrapper[4809]: I0226 14:56:07.654176 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" event={"ID":"9fb723f4-b0eb-4520-a602-c723a935d0c6","Type":"ContainerDied","Data":"bdf4b1bcffe65567ed3fd1c8a2356e0aa5ee1d0b2697ba277038978e928e6951"} Feb 26 14:56:07 crc kubenswrapper[4809]: I0226 14:56:07.694411 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" podStartSLOduration=1.673966637 podStartE2EDuration="7.694383011s" podCreationTimestamp="2026-02-26 14:56:00 +0000 UTC" firstStartedPulling="2026-02-26 14:56:01.020616176 +0000 UTC m=+2539.493936719" lastFinishedPulling="2026-02-26 14:56:07.04103258 +0000 UTC m=+2545.514353093" observedRunningTime="2026-02-26 14:56:07.670424744 +0000 UTC m=+2546.143745277" watchObservedRunningTime="2026-02-26 14:56:07.694383011 +0000 UTC m=+2546.167703544" Feb 26 14:56:08 crc kubenswrapper[4809]: I0226 14:56:08.674578 4809 generic.go:334] "Generic (PLEG): container finished" podID="29df1fbc-1739-4d79-a692-de2ca9570d28" containerID="e1034ee5383864858fa9cbd9672825d3108fabe8f72d30d63718e78d4d464096" exitCode=0 Feb 26 14:56:08 crc kubenswrapper[4809]: I0226 14:56:08.674678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" event={"ID":"29df1fbc-1739-4d79-a692-de2ca9570d28","Type":"ContainerDied","Data":"e1034ee5383864858fa9cbd9672825d3108fabe8f72d30d63718e78d4d464096"} Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.300489 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.428351 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-457cm\" (UniqueName: \"kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm\") pod \"9fb723f4-b0eb-4520-a602-c723a935d0c6\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.428952 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam\") pod \"9fb723f4-b0eb-4520-a602-c723a935d0c6\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.429079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory\") pod \"9fb723f4-b0eb-4520-a602-c723a935d0c6\" (UID: \"9fb723f4-b0eb-4520-a602-c723a935d0c6\") " Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.435486 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm" (OuterVolumeSpecName: "kube-api-access-457cm") pod "9fb723f4-b0eb-4520-a602-c723a935d0c6" (UID: "9fb723f4-b0eb-4520-a602-c723a935d0c6"). InnerVolumeSpecName "kube-api-access-457cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.472862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9fb723f4-b0eb-4520-a602-c723a935d0c6" (UID: "9fb723f4-b0eb-4520-a602-c723a935d0c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.487109 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory" (OuterVolumeSpecName: "inventory") pod "9fb723f4-b0eb-4520-a602-c723a935d0c6" (UID: "9fb723f4-b0eb-4520-a602-c723a935d0c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.533030 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.533073 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb723f4-b0eb-4520-a602-c723a935d0c6-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.533087 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-457cm\" (UniqueName: \"kubernetes.io/projected/9fb723f4-b0eb-4520-a602-c723a935d0c6-kube-api-access-457cm\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.689696 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.689695 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl" event={"ID":"9fb723f4-b0eb-4520-a602-c723a935d0c6","Type":"ContainerDied","Data":"414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b"} Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.693192 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414d4945f6cd206aad4a4e260c4d528a16512fdf188a3852b4f99a343d468c2b" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.880281 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm"] Feb 26 14:56:09 crc kubenswrapper[4809]: E0226 14:56:09.880822 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb723f4-b0eb-4520-a602-c723a935d0c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.880837 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb723f4-b0eb-4520-a602-c723a935d0c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.881146 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb723f4-b0eb-4520-a602-c723a935d0c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.881957 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.884098 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.884341 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.884735 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.884757 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:56:09 crc kubenswrapper[4809]: I0226 14:56:09.895437 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm"] Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.045823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmt2\" (UniqueName: \"kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.046392 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.046439 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.101367 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.149397 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gk2m\" (UniqueName: \"kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m\") pod \"29df1fbc-1739-4d79-a692-de2ca9570d28\" (UID: \"29df1fbc-1739-4d79-a692-de2ca9570d28\") " Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.149737 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wmt2\" (UniqueName: \"kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.149822 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.149849 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.153559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.155407 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m" (OuterVolumeSpecName: "kube-api-access-9gk2m") pod "29df1fbc-1739-4d79-a692-de2ca9570d28" (UID: "29df1fbc-1739-4d79-a692-de2ca9570d28"). InnerVolumeSpecName "kube-api-access-9gk2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.156685 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.173315 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wmt2\" (UniqueName: \"kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qsmkm\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.206257 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.253040 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gk2m\" (UniqueName: \"kubernetes.io/projected/29df1fbc-1739-4d79-a692-de2ca9570d28-kube-api-access-9gk2m\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.703445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" event={"ID":"29df1fbc-1739-4d79-a692-de2ca9570d28","Type":"ContainerDied","Data":"a8b69da24e93d6791b76a82f3fca333023525339f466ae37c109115357a19e72"} Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.703502 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b69da24e93d6791b76a82f3fca333023525339f466ae37c109115357a19e72" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.703563 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535296-mvh7b" Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.743691 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-sqr6p"] Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.756128 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535290-sqr6p"] Feb 26 14:56:10 crc kubenswrapper[4809]: I0226 14:56:10.771242 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm"] Feb 26 14:56:10 crc kubenswrapper[4809]: W0226 14:56:10.777443 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34408b8_4589_48e2_b94c_58a98817be4c.slice/crio-e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d WatchSource:0}: Error finding container e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d: Status 404 returned error can't find the container with id e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d Feb 26 14:56:11 crc kubenswrapper[4809]: I0226 14:56:11.224268 4809 scope.go:117] "RemoveContainer" containerID="4bfce04cf3f0603489992f7c0230f510181e5ef797f594767ad222cbf44927aa" Feb 26 14:56:11 crc kubenswrapper[4809]: I0226 14:56:11.716311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" event={"ID":"b34408b8-4589-48e2-b94c-58a98817be4c","Type":"ContainerStarted","Data":"a646455c66c1cbc8d15a208888c405802279f422530d3aab6a5436ae8b8fcbca"} Feb 26 14:56:11 crc kubenswrapper[4809]: I0226 14:56:11.716727 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" event={"ID":"b34408b8-4589-48e2-b94c-58a98817be4c","Type":"ContainerStarted","Data":"e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d"} Feb 26 14:56:11 crc kubenswrapper[4809]: I0226 14:56:11.738644 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" podStartSLOduration=2.348828063 podStartE2EDuration="2.73862516s" podCreationTimestamp="2026-02-26 14:56:09 +0000 UTC" firstStartedPulling="2026-02-26 14:56:10.779423971 +0000 UTC m=+2549.252744494" lastFinishedPulling="2026-02-26 14:56:11.169221038 +0000 UTC m=+2549.642541591" observedRunningTime="2026-02-26 14:56:11.737517618 +0000 UTC m=+2550.210838151" watchObservedRunningTime="2026-02-26 14:56:11.73862516 +0000 UTC m=+2550.211945683" Feb 26 14:56:12 crc kubenswrapper[4809]: I0226 14:56:12.704850 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb08ae20-f6b0-489a-a408-5efee2ec79f0" path="/var/lib/kubelet/pods/eb08ae20-f6b0-489a-a408-5efee2ec79f0/volumes" Feb 26 14:56:20 crc kubenswrapper[4809]: I0226 14:56:20.257488 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:56:20 crc kubenswrapper[4809]: E0226 14:56:20.258769 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" 
podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:56:33 crc kubenswrapper[4809]: I0226 14:56:33.257896 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:56:33 crc kubenswrapper[4809]: E0226 14:56:33.258961 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:56:44 crc kubenswrapper[4809]: I0226 14:56:44.257226 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:56:44 crc kubenswrapper[4809]: E0226 14:56:44.259042 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:56:49 crc kubenswrapper[4809]: I0226 14:56:49.160680 4809 generic.go:334] "Generic (PLEG): container finished" podID="b34408b8-4589-48e2-b94c-58a98817be4c" containerID="a646455c66c1cbc8d15a208888c405802279f422530d3aab6a5436ae8b8fcbca" exitCode=0 Feb 26 14:56:49 crc kubenswrapper[4809]: I0226 14:56:49.160786 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" event={"ID":"b34408b8-4589-48e2-b94c-58a98817be4c","Type":"ContainerDied","Data":"a646455c66c1cbc8d15a208888c405802279f422530d3aab6a5436ae8b8fcbca"} Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.743494 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.833184 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory\") pod \"b34408b8-4589-48e2-b94c-58a98817be4c\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.833414 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmt2\" (UniqueName: \"kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2\") pod \"b34408b8-4589-48e2-b94c-58a98817be4c\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.834051 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam\") pod \"b34408b8-4589-48e2-b94c-58a98817be4c\" (UID: \"b34408b8-4589-48e2-b94c-58a98817be4c\") " Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.843988 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2" (OuterVolumeSpecName: "kube-api-access-4wmt2") pod "b34408b8-4589-48e2-b94c-58a98817be4c" (UID: "b34408b8-4589-48e2-b94c-58a98817be4c"). InnerVolumeSpecName "kube-api-access-4wmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.871833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b34408b8-4589-48e2-b94c-58a98817be4c" (UID: "b34408b8-4589-48e2-b94c-58a98817be4c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.895031 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory" (OuterVolumeSpecName: "inventory") pod "b34408b8-4589-48e2-b94c-58a98817be4c" (UID: "b34408b8-4589-48e2-b94c-58a98817be4c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.936756 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wmt2\" (UniqueName: \"kubernetes.io/projected/b34408b8-4589-48e2-b94c-58a98817be4c-kube-api-access-4wmt2\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.936794 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:50 crc kubenswrapper[4809]: I0226 14:56:50.936806 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b34408b8-4589-48e2-b94c-58a98817be4c-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.189071 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" event={"ID":"b34408b8-4589-48e2-b94c-58a98817be4c","Type":"ContainerDied","Data":"e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d"} Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.189448 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d754989fa42918565e3939ed40b0e5625095e93837e08f8c1dc65380e3809d" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.189141 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qsmkm" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.295664 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8"] Feb 26 14:56:51 crc kubenswrapper[4809]: E0226 14:56:51.296206 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34408b8-4589-48e2-b94c-58a98817be4c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.296227 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34408b8-4589-48e2-b94c-58a98817be4c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:51 crc kubenswrapper[4809]: E0226 14:56:51.296279 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29df1fbc-1739-4d79-a692-de2ca9570d28" containerName="oc" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.296285 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="29df1fbc-1739-4d79-a692-de2ca9570d28" containerName="oc" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.296537 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="29df1fbc-1739-4d79-a692-de2ca9570d28" containerName="oc" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.296569 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34408b8-4589-48e2-b94c-58a98817be4c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.297378 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.301226 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.302159 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.302197 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.307758 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.324655 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8"] Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.448712 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bsbg\" (UniqueName: \"kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.448789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.448819 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.551355 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bsbg\" (UniqueName: \"kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.551485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.551540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.555838 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.557103 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.569905 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bsbg\" (UniqueName: \"kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:51 crc kubenswrapper[4809]: I0226 14:56:51.625755 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:56:52 crc kubenswrapper[4809]: I0226 14:56:52.173372 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8"] Feb 26 14:56:52 crc kubenswrapper[4809]: W0226 14:56:52.181187 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2154870_7448_40fd_b259_7a0a77cda1ef.slice/crio-5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f WatchSource:0}: Error finding container 5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f: Status 404 returned error can't find the container with id 5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f Feb 26 14:56:52 crc kubenswrapper[4809]: I0226 14:56:52.224308 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" event={"ID":"a2154870-7448-40fd-b259-7a0a77cda1ef","Type":"ContainerStarted","Data":"5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f"} Feb 26 14:56:53 crc kubenswrapper[4809]: I0226 14:56:53.240239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" event={"ID":"a2154870-7448-40fd-b259-7a0a77cda1ef","Type":"ContainerStarted","Data":"17f304faa48f9bed7262410fee1748d7e6542420c70e4bee2bb9cf5646b6219c"} Feb 26 14:56:57 crc kubenswrapper[4809]: I0226 14:56:57.257229 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:56:57 crc kubenswrapper[4809]: E0226 14:56:57.258293 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:56:58 crc kubenswrapper[4809]: I0226 14:56:58.074964 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" podStartSLOduration=6.664039245 podStartE2EDuration="7.074947148s" podCreationTimestamp="2026-02-26 14:56:51 +0000 UTC" firstStartedPulling="2026-02-26 14:56:52.193371009 +0000 UTC m=+2590.666691532" lastFinishedPulling="2026-02-26 14:56:52.604278912 +0000 UTC m=+2591.077599435" observedRunningTime="2026-02-26 14:56:53.286040228 +0000 UTC m=+2591.759360761" watchObservedRunningTime="2026-02-26 14:56:58.074947148 +0000 UTC m=+2596.548267671" Feb 26 14:56:58 crc kubenswrapper[4809]: I0226 14:56:58.091073 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-nww5r"] Feb 26 14:56:58 crc kubenswrapper[4809]: I0226 14:56:58.113894 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-nww5r"] Feb 26 14:56:58 crc kubenswrapper[4809]: I0226 14:56:58.273291 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8e6711-9d3f-4961-84c5-defbf691d665" path="/var/lib/kubelet/pods/9b8e6711-9d3f-4961-84c5-defbf691d665/volumes" Feb 26 14:57:11 crc kubenswrapper[4809]: I0226 14:57:11.258513 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:57:11 crc kubenswrapper[4809]: E0226 14:57:11.262100 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:57:11 crc kubenswrapper[4809]: I0226 14:57:11.390122 4809 scope.go:117] "RemoveContainer" containerID="201ba42ac0c0cc0d72c2668a279ca6cc31c2e002c071969cdd700216b3313e2f" Feb 26 14:57:11 crc kubenswrapper[4809]: I0226 14:57:11.981338 4809 scope.go:117] "RemoveContainer" containerID="78cf1dc9392607eeb8a3ca38c10fd2d791168d9e1b5f8c74fb27ddea1850c48a" Feb 26 14:57:24 crc kubenswrapper[4809]: I0226 14:57:24.257831 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:57:24 crc kubenswrapper[4809]: E0226 14:57:24.258817 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:57:37 crc kubenswrapper[4809]: I0226 14:57:37.257346 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:57:37 crc kubenswrapper[4809]: E0226 14:57:37.259500 4809 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:57:46 crc kubenswrapper[4809]: I0226 14:57:46.923904 4809 generic.go:334] "Generic (PLEG): container finished" podID="a2154870-7448-40fd-b259-7a0a77cda1ef" containerID="17f304faa48f9bed7262410fee1748d7e6542420c70e4bee2bb9cf5646b6219c" exitCode=0 Feb 26 14:57:46 crc kubenswrapper[4809]: I0226 14:57:46.923982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" event={"ID":"a2154870-7448-40fd-b259-7a0a77cda1ef","Type":"ContainerDied","Data":"17f304faa48f9bed7262410fee1748d7e6542420c70e4bee2bb9cf5646b6219c"} Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.475398 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.615806 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bsbg\" (UniqueName: \"kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg\") pod \"a2154870-7448-40fd-b259-7a0a77cda1ef\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.615887 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam\") pod \"a2154870-7448-40fd-b259-7a0a77cda1ef\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.616059 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory\") pod \"a2154870-7448-40fd-b259-7a0a77cda1ef\" (UID: \"a2154870-7448-40fd-b259-7a0a77cda1ef\") " Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.620943 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg" (OuterVolumeSpecName: "kube-api-access-2bsbg") pod "a2154870-7448-40fd-b259-7a0a77cda1ef" (UID: "a2154870-7448-40fd-b259-7a0a77cda1ef"). InnerVolumeSpecName "kube-api-access-2bsbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.645499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2154870-7448-40fd-b259-7a0a77cda1ef" (UID: "a2154870-7448-40fd-b259-7a0a77cda1ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.645858 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory" (OuterVolumeSpecName: "inventory") pod "a2154870-7448-40fd-b259-7a0a77cda1ef" (UID: "a2154870-7448-40fd-b259-7a0a77cda1ef"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.719987 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bsbg\" (UniqueName: \"kubernetes.io/projected/a2154870-7448-40fd-b259-7a0a77cda1ef-kube-api-access-2bsbg\") on node \"crc\" DevicePath \"\"" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.720067 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.720088 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2154870-7448-40fd-b259-7a0a77cda1ef-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.946757 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" event={"ID":"a2154870-7448-40fd-b259-7a0a77cda1ef","Type":"ContainerDied","Data":"5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f"} Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.947455 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ade0aa38ce68294f63a7a9a2e7874844b8767e3068f5d264ab8885dcf9fa65f" Feb 26 14:57:48 crc kubenswrapper[4809]: I0226 14:57:48.946812 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.060325 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mnhs8"] Feb 26 14:57:49 crc kubenswrapper[4809]: E0226 14:57:49.060836 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2154870-7448-40fd-b259-7a0a77cda1ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.060850 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2154870-7448-40fd-b259-7a0a77cda1ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.061145 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2154870-7448-40fd-b259-7a0a77cda1ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.062078 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.070994 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.071041 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.071194 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.071232 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.083776 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mnhs8"] Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.128771 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.128865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569l7\" (UniqueName: \"kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.129032 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.231185 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.231570 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.231852 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569l7\" (UniqueName: \"kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" Feb 26 14:57:49 crc 
kubenswrapper[4809]: I0226 14:57:49.235895 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.238635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.251447 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569l7\" (UniqueName: \"kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7\") pod \"ssh-known-hosts-edpm-deployment-mnhs8\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") " pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:57:49 crc kubenswrapper[4809]: I0226 14:57:49.381955 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:57:50 crc kubenswrapper[4809]: I0226 14:57:50.002177 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mnhs8"]
Feb 26 14:57:50 crc kubenswrapper[4809]: W0226 14:57:50.002822 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8e2003_a428_496b_b735_9d4e242712a9.slice/crio-5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930 WatchSource:0}: Error finding container 5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930: Status 404 returned error can't find the container with id 5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930
Feb 26 14:57:50 crc kubenswrapper[4809]: I0226 14:57:50.257864 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:57:50 crc kubenswrapper[4809]: E0226 14:57:50.258189 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:57:50 crc kubenswrapper[4809]: I0226 14:57:50.980269 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" event={"ID":"9f8e2003-a428-496b-b735-9d4e242712a9","Type":"ContainerStarted","Data":"5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930"}
Feb 26 14:57:52 crc kubenswrapper[4809]: I0226 14:57:52.002925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" event={"ID":"9f8e2003-a428-496b-b735-9d4e242712a9","Type":"ContainerStarted","Data":"893d4db2cf8121449381de15f8e5213878d5b5f3e758f8cbca9a80969d916fe0"}
Feb 26 14:57:52 crc kubenswrapper[4809]: I0226 14:57:52.027796 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" podStartSLOduration=2.328757177 podStartE2EDuration="3.027776549s" podCreationTimestamp="2026-02-26 14:57:49 +0000 UTC" firstStartedPulling="2026-02-26 14:57:50.006167759 +0000 UTC m=+2648.479488302" lastFinishedPulling="2026-02-26 14:57:50.705187141 +0000 UTC m=+2649.178507674" observedRunningTime="2026-02-26 14:57:52.021720675 +0000 UTC m=+2650.495041198" watchObservedRunningTime="2026-02-26 14:57:52.027776549 +0000 UTC m=+2650.501097082"
Feb 26 14:57:58 crc kubenswrapper[4809]: I0226 14:57:58.099840 4809 generic.go:334] "Generic (PLEG): container finished" podID="9f8e2003-a428-496b-b735-9d4e242712a9" containerID="893d4db2cf8121449381de15f8e5213878d5b5f3e758f8cbca9a80969d916fe0" exitCode=0
Feb 26 14:57:58 crc kubenswrapper[4809]: I0226 14:57:58.099960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" event={"ID":"9f8e2003-a428-496b-b735-9d4e242712a9","Type":"ContainerDied","Data":"893d4db2cf8121449381de15f8e5213878d5b5f3e758f8cbca9a80969d916fe0"}
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.673799 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.724769 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-569l7\" (UniqueName: \"kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7\") pod \"9f8e2003-a428-496b-b735-9d4e242712a9\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") "
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.724830 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam\") pod \"9f8e2003-a428-496b-b735-9d4e242712a9\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") "
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.724884 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0\") pod \"9f8e2003-a428-496b-b735-9d4e242712a9\" (UID: \"9f8e2003-a428-496b-b735-9d4e242712a9\") "
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.731878 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7" (OuterVolumeSpecName: "kube-api-access-569l7") pod "9f8e2003-a428-496b-b735-9d4e242712a9" (UID: "9f8e2003-a428-496b-b735-9d4e242712a9"). InnerVolumeSpecName "kube-api-access-569l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.759050 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f8e2003-a428-496b-b735-9d4e242712a9" (UID: "9f8e2003-a428-496b-b735-9d4e242712a9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.760473 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9f8e2003-a428-496b-b735-9d4e242712a9" (UID: "9f8e2003-a428-496b-b735-9d4e242712a9"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.827949 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-569l7\" (UniqueName: \"kubernetes.io/projected/9f8e2003-a428-496b-b735-9d4e242712a9-kube-api-access-569l7\") on node \"crc\" DevicePath \"\""
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.828174 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 14:57:59 crc kubenswrapper[4809]: I0226 14:57:59.828261 4809 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8e2003-a428-496b-b735-9d4e242712a9-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.131514 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8" event={"ID":"9f8e2003-a428-496b-b735-9d4e242712a9","Type":"ContainerDied","Data":"5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930"}
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.131877 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e2874f6c6cf31a6a3393cd90da7415d413bad5afad92bf09cb6a37aa4bfa930"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.131595 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mnhs8"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.156528 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535298-2hbp5"]
Feb 26 14:58:00 crc kubenswrapper[4809]: E0226 14:58:00.157509 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8e2003-a428-496b-b735-9d4e242712a9" containerName="ssh-known-hosts-edpm-deployment"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.157542 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8e2003-a428-496b-b735-9d4e242712a9" containerName="ssh-known-hosts-edpm-deployment"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.158006 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8e2003-a428-496b-b735-9d4e242712a9" containerName="ssh-known-hosts-edpm-deployment"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.159477 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.162138 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.162587 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.165003 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.189036 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-2hbp5"]
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.239686 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvddl\" (UniqueName: \"kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl\") pod \"auto-csr-approver-29535298-2hbp5\" (UID: \"9acc1d4b-e84e-4760-a5c0-ce567be35ec1\") " pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.273054 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"]
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.275636 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.277652 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.278109 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.278267 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.278484 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.281304 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"]
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.346459 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvddl\" (UniqueName: \"kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl\") pod \"auto-csr-approver-29535298-2hbp5\" (UID: \"9acc1d4b-e84e-4760-a5c0-ce567be35ec1\") " pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.348292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jlp\" (UniqueName: \"kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.348544 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.348700 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.399880 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvddl\" (UniqueName: \"kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl\") pod \"auto-csr-approver-29535298-2hbp5\" (UID: \"9acc1d4b-e84e-4760-a5c0-ce567be35ec1\") " pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.455915 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jlp\" (UniqueName: \"kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.456153 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.456311 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.460200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.465912 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.483817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jlp\" (UniqueName: \"kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b2h84\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.519977 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:00 crc kubenswrapper[4809]: I0226 14:58:00.603537 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:01 crc kubenswrapper[4809]: I0226 14:58:01.063269 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-2hbp5"]
Feb 26 14:58:01 crc kubenswrapper[4809]: I0226 14:58:01.143625 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-2hbp5" event={"ID":"9acc1d4b-e84e-4760-a5c0-ce567be35ec1","Type":"ContainerStarted","Data":"3b951962c44529a2f71777ac5322583fecaba5b049edf6185c7506966a2576da"}
Feb 26 14:58:01 crc kubenswrapper[4809]: I0226 14:58:01.218922 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"]
Feb 26 14:58:01 crc kubenswrapper[4809]: W0226 14:58:01.224751 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc491f7d7_7607_4605_b5fb_312493e0bebf.slice/crio-2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77 WatchSource:0}: Error finding container 2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77: Status 404 returned error can't find the container with id 2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77
Feb 26 14:58:02 crc kubenswrapper[4809]: I0226 14:58:02.153067 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84" event={"ID":"c491f7d7-7607-4605-b5fb-312493e0bebf","Type":"ContainerStarted","Data":"2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77"}
Feb 26 14:58:02 crc kubenswrapper[4809]: I0226 14:58:02.275365 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:58:02 crc kubenswrapper[4809]: E0226 14:58:02.275651 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:58:03 crc kubenswrapper[4809]: I0226 14:58:03.187368 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84" event={"ID":"c491f7d7-7607-4605-b5fb-312493e0bebf","Type":"ContainerStarted","Data":"6e320fcf3009520ad5fc7e09dc9cdb8983f7a5bbb94a01ac1389a23256f31471"}
Feb 26 14:58:03 crc kubenswrapper[4809]: I0226 14:58:03.191186 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-2hbp5" event={"ID":"9acc1d4b-e84e-4760-a5c0-ce567be35ec1","Type":"ContainerStarted","Data":"81fd6b967327bf7b7e7b33a71177385b6890eeb2c3df7c4c7c58896e738525d9"}
Feb 26 14:58:03 crc kubenswrapper[4809]: I0226 14:58:03.206558 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535298-2hbp5" podStartSLOduration=1.799142583 podStartE2EDuration="3.206538775s" podCreationTimestamp="2026-02-26 14:58:00 +0000 UTC" firstStartedPulling="2026-02-26 14:58:01.048396318 +0000 UTC m=+2659.521716841" lastFinishedPulling="2026-02-26 14:58:02.45579249 +0000 UTC m=+2660.929113033" observedRunningTime="2026-02-26 14:58:03.202574421 +0000 UTC m=+2661.675894944" watchObservedRunningTime="2026-02-26 14:58:03.206538775 +0000 UTC m=+2661.679859298"
Feb 26 14:58:03 crc kubenswrapper[4809]: I0226 14:58:03.224179 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84" podStartSLOduration=1.8080415890000001 podStartE2EDuration="3.224163011s" podCreationTimestamp="2026-02-26 14:58:00 +0000 UTC" firstStartedPulling="2026-02-26 14:58:01.226969573 +0000 UTC m=+2659.700290096" lastFinishedPulling="2026-02-26 14:58:02.643090985 +0000 UTC m=+2661.116411518" observedRunningTime="2026-02-26 14:58:03.218972952 +0000 UTC m=+2661.692293475" watchObservedRunningTime="2026-02-26 14:58:03.224163011 +0000 UTC m=+2661.697483534"
Feb 26 14:58:04 crc kubenswrapper[4809]: I0226 14:58:04.203260 4809 generic.go:334] "Generic (PLEG): container finished" podID="9acc1d4b-e84e-4760-a5c0-ce567be35ec1" containerID="81fd6b967327bf7b7e7b33a71177385b6890eeb2c3df7c4c7c58896e738525d9" exitCode=0
Feb 26 14:58:04 crc kubenswrapper[4809]: I0226 14:58:04.203358 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-2hbp5" event={"ID":"9acc1d4b-e84e-4760-a5c0-ce567be35ec1","Type":"ContainerDied","Data":"81fd6b967327bf7b7e7b33a71177385b6890eeb2c3df7c4c7c58896e738525d9"}
Feb 26 14:58:05 crc kubenswrapper[4809]: I0226 14:58:05.720822 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:05 crc kubenswrapper[4809]: I0226 14:58:05.796722 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvddl\" (UniqueName: \"kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl\") pod \"9acc1d4b-e84e-4760-a5c0-ce567be35ec1\" (UID: \"9acc1d4b-e84e-4760-a5c0-ce567be35ec1\") "
Feb 26 14:58:05 crc kubenswrapper[4809]: I0226 14:58:05.803764 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl" (OuterVolumeSpecName: "kube-api-access-rvddl") pod "9acc1d4b-e84e-4760-a5c0-ce567be35ec1" (UID: "9acc1d4b-e84e-4760-a5c0-ce567be35ec1"). InnerVolumeSpecName "kube-api-access-rvddl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:58:05 crc kubenswrapper[4809]: I0226 14:58:05.903313 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvddl\" (UniqueName: \"kubernetes.io/projected/9acc1d4b-e84e-4760-a5c0-ce567be35ec1-kube-api-access-rvddl\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:06 crc kubenswrapper[4809]: I0226 14:58:06.235921 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535298-2hbp5" event={"ID":"9acc1d4b-e84e-4760-a5c0-ce567be35ec1","Type":"ContainerDied","Data":"3b951962c44529a2f71777ac5322583fecaba5b049edf6185c7506966a2576da"}
Feb 26 14:58:06 crc kubenswrapper[4809]: I0226 14:58:06.235963 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b951962c44529a2f71777ac5322583fecaba5b049edf6185c7506966a2576da"
Feb 26 14:58:06 crc kubenswrapper[4809]: I0226 14:58:06.235984 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535298-2hbp5"
Feb 26 14:58:06 crc kubenswrapper[4809]: I0226 14:58:06.811389 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-kt9rn"]
Feb 26 14:58:06 crc kubenswrapper[4809]: I0226 14:58:06.825787 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535292-kt9rn"]
Feb 26 14:58:08 crc kubenswrapper[4809]: I0226 14:58:08.270611 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52890012-1f14-4113-b279-fd2a240978da" path="/var/lib/kubelet/pods/52890012-1f14-4113-b279-fd2a240978da/volumes"
Feb 26 14:58:11 crc kubenswrapper[4809]: I0226 14:58:11.286238 4809 generic.go:334] "Generic (PLEG): container finished" podID="c491f7d7-7607-4605-b5fb-312493e0bebf" containerID="6e320fcf3009520ad5fc7e09dc9cdb8983f7a5bbb94a01ac1389a23256f31471" exitCode=0
Feb 26 14:58:11 crc kubenswrapper[4809]: I0226 14:58:11.286325 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84" event={"ID":"c491f7d7-7607-4605-b5fb-312493e0bebf","Type":"ContainerDied","Data":"6e320fcf3009520ad5fc7e09dc9cdb8983f7a5bbb94a01ac1389a23256f31471"}
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.165903 4809 scope.go:117] "RemoveContainer" containerID="7121226dda8ab5e6a3693fc5102ef7f1c0ebbaa0033d462a05b876ee8af27d58"
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.768845 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.891937 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam\") pod \"c491f7d7-7607-4605-b5fb-312493e0bebf\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") "
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.892392 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jlp\" (UniqueName: \"kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp\") pod \"c491f7d7-7607-4605-b5fb-312493e0bebf\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") "
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.892520 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory\") pod \"c491f7d7-7607-4605-b5fb-312493e0bebf\" (UID: \"c491f7d7-7607-4605-b5fb-312493e0bebf\") "
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.897836 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp" (OuterVolumeSpecName: "kube-api-access-b4jlp") pod "c491f7d7-7607-4605-b5fb-312493e0bebf" (UID: "c491f7d7-7607-4605-b5fb-312493e0bebf"). InnerVolumeSpecName "kube-api-access-b4jlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.933155 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory" (OuterVolumeSpecName: "inventory") pod "c491f7d7-7607-4605-b5fb-312493e0bebf" (UID: "c491f7d7-7607-4605-b5fb-312493e0bebf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.941140 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c491f7d7-7607-4605-b5fb-312493e0bebf" (UID: "c491f7d7-7607-4605-b5fb-312493e0bebf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.998147 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.998194 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jlp\" (UniqueName: \"kubernetes.io/projected/c491f7d7-7607-4605-b5fb-312493e0bebf-kube-api-access-b4jlp\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:12 crc kubenswrapper[4809]: I0226 14:58:12.998213 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c491f7d7-7607-4605-b5fb-312493e0bebf-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.314633 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84" event={"ID":"c491f7d7-7607-4605-b5fb-312493e0bebf","Type":"ContainerDied","Data":"2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77"}
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.314671 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ddc7766763f419264a6821f3ee0e8a5d451931e98412a17a21da8bb15cf9f77"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.314669 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b2h84"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.410244 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"]
Feb 26 14:58:13 crc kubenswrapper[4809]: E0226 14:58:13.410738 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acc1d4b-e84e-4760-a5c0-ce567be35ec1" containerName="oc"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.410754 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acc1d4b-e84e-4760-a5c0-ce567be35ec1" containerName="oc"
Feb 26 14:58:13 crc kubenswrapper[4809]: E0226 14:58:13.410792 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c491f7d7-7607-4605-b5fb-312493e0bebf" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.410805 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c491f7d7-7607-4605-b5fb-312493e0bebf" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.411046 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c491f7d7-7607-4605-b5fb-312493e0bebf" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.411070 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acc1d4b-e84e-4760-a5c0-ce567be35ec1" containerName="oc"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.411867 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.417171 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.417499 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.425861 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.426080 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.436096 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"]
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.511871 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.512263 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.512418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqgk\" (UniqueName: \"kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.614197 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.614293 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.614353 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqgk\" (UniqueName: \"kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.619084 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.620139 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.649233 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqgk\" (UniqueName: \"kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:13 crc kubenswrapper[4809]: I0226 14:58:13.731939 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:14 crc kubenswrapper[4809]: I0226 14:58:14.326672 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"]
Feb 26 14:58:14 crc kubenswrapper[4809]: W0226 14:58:14.329356 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecbd5645_f7e4_4741_9042_5d1db68de941.slice/crio-dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70 WatchSource:0}: Error finding container dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70: Status 404 returned error can't find the container with id dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70
Feb 26 14:58:15 crc kubenswrapper[4809]: I0226 14:58:15.346848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z" event={"ID":"ecbd5645-f7e4-4741-9042-5d1db68de941","Type":"ContainerStarted","Data":"5ff86d195b2e06c92ddd657f830a763cf706fa1999a96b7107115e17345b1276"}
Feb 26 14:58:15 crc kubenswrapper[4809]: I0226 14:58:15.347638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z" event={"ID":"ecbd5645-f7e4-4741-9042-5d1db68de941","Type":"ContainerStarted","Data":"dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70"}
Feb 26 14:58:15 crc kubenswrapper[4809]: I0226 14:58:15.368848 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z" podStartSLOduration=1.696508214 podStartE2EDuration="2.368822619s" podCreationTimestamp="2026-02-26 14:58:13 +0000 UTC" firstStartedPulling="2026-02-26 14:58:14.3360842 +0000 UTC m=+2672.809404723" lastFinishedPulling="2026-02-26 14:58:15.008398595 +0000 UTC m=+2673.481719128" observedRunningTime="2026-02-26 14:58:15.367925164 +0000 UTC m=+2673.841245687" watchObservedRunningTime="2026-02-26 14:58:15.368822619 +0000 UTC m=+2673.842143182"
Feb 26 14:58:17 crc kubenswrapper[4809]: I0226 14:58:17.256814 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:58:17 crc kubenswrapper[4809]: E0226 14:58:17.257389 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:58:25 crc kubenswrapper[4809]: I0226 14:58:25.504572 4809 generic.go:334] "Generic (PLEG): container finished" podID="ecbd5645-f7e4-4741-9042-5d1db68de941" containerID="5ff86d195b2e06c92ddd657f830a763cf706fa1999a96b7107115e17345b1276" exitCode=0
Feb 26 14:58:25 crc kubenswrapper[4809]: I0226 14:58:25.504660 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z" event={"ID":"ecbd5645-f7e4-4741-9042-5d1db68de941","Type":"ContainerDied","Data":"5ff86d195b2e06c92ddd657f830a763cf706fa1999a96b7107115e17345b1276"}
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.149231 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.187457 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory\") pod \"ecbd5645-f7e4-4741-9042-5d1db68de941\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") "
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.187523 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxqgk\" (UniqueName: \"kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk\") pod \"ecbd5645-f7e4-4741-9042-5d1db68de941\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") "
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.187678 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam\") pod \"ecbd5645-f7e4-4741-9042-5d1db68de941\" (UID: \"ecbd5645-f7e4-4741-9042-5d1db68de941\") "
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.195094 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk" (OuterVolumeSpecName: "kube-api-access-bxqgk") pod "ecbd5645-f7e4-4741-9042-5d1db68de941" (UID: "ecbd5645-f7e4-4741-9042-5d1db68de941"). InnerVolumeSpecName "kube-api-access-bxqgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.241351 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory" (OuterVolumeSpecName: "inventory") pod "ecbd5645-f7e4-4741-9042-5d1db68de941" (UID: "ecbd5645-f7e4-4741-9042-5d1db68de941"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.243583 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ecbd5645-f7e4-4741-9042-5d1db68de941" (UID: "ecbd5645-f7e4-4741-9042-5d1db68de941"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.291452 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxqgk\" (UniqueName: \"kubernetes.io/projected/ecbd5645-f7e4-4741-9042-5d1db68de941-kube-api-access-bxqgk\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.291484 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.291493 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd5645-f7e4-4741-9042-5d1db68de941-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.540887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z" event={"ID":"ecbd5645-f7e4-4741-9042-5d1db68de941","Type":"ContainerDied","Data":"dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70"}
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.540936 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc93073d70585324b2fd1bbed1799702d494f279df5eda7350fb3e602b85fc70"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.541054 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.626910 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"]
Feb 26 14:58:27 crc kubenswrapper[4809]: E0226 14:58:27.627653 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbd5645-f7e4-4741-9042-5d1db68de941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.627779 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbd5645-f7e4-4741-9042-5d1db68de941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.628108 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbd5645-f7e4-4741-9042-5d1db68de941" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.629099 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.675415 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.675511 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.675940 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.676342 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.676615 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.676957 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.677049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.677244 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.677400 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703373 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703681 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzj6\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703905 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703933 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.703972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704026 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704063 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704107 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704129 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704272 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.704330 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.716341 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"]
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806561 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806739 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806859 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.806926 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzj6\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.807081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.807152 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808000 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808240 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808341 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808487 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.808600 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.810213 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.810350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.813848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.813923 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.813934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.814276 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.814791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.815303 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.815877 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.815886 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.816387 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.816426 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.816568 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.818334 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.818888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.820270 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.820653 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:27 crc kubenswrapper[4809]: I0226 14:58:27.822000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzj6\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:28 crc kubenswrapper[4809]: I0226 14:58:28.013862 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"
Feb 26 14:58:28 crc kubenswrapper[4809]: I0226 14:58:28.258905 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:58:28 crc kubenswrapper[4809]: E0226 14:58:28.259550 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 14:58:28 crc kubenswrapper[4809]: I0226 14:58:28.646837 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2"]
Feb 26 14:58:29 crc kubenswrapper[4809]: I0226 14:58:29.574952 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" event={"ID":"79e3b79c-2611-4e20-b330-c37740777890","Type":"ContainerStarted","Data":"0ba7486dcc8921de08f8ebde0c363670c0a6d77cacca8ebee27ca62e8c6307ec"}
Feb 26 14:58:29 crc kubenswrapper[4809]: I0226 14:58:29.575404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" event={"ID":"79e3b79c-2611-4e20-b330-c37740777890","Type":"ContainerStarted","Data":"bb14424cf080c7a037e6ae2e25e875d16f0488a2d7c9aa7058e181c37a108920"}
Feb 26 14:58:29 crc kubenswrapper[4809]: I0226 14:58:29.624854 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" podStartSLOduration=2.227232013 podStartE2EDuration="2.624826713s" podCreationTimestamp="2026-02-26 14:58:27 +0000 UTC" firstStartedPulling="2026-02-26 14:58:28.647741841 +0000 UTC m=+2687.121062404" lastFinishedPulling="2026-02-26 14:58:29.045336541 +0000 UTC m=+2687.518657104" observedRunningTime="2026-02-26 14:58:29.60659524 +0000 UTC m=+2688.079915783" watchObservedRunningTime="2026-02-26 14:58:29.624826713 +0000 UTC m=+2688.098147256"
Feb 26 14:58:39 crc kubenswrapper[4809]: I0226 14:58:39.258782 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79"
Feb 26 14:58:39 crc kubenswrapper[4809]: E0226 14:58:39.259523 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\""
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:58:54 crc kubenswrapper[4809]: I0226 14:58:54.258370 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:58:54 crc kubenswrapper[4809]: E0226 14:58:54.259654 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:59:08 crc kubenswrapper[4809]: I0226 14:59:08.258134 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:59:08 crc kubenswrapper[4809]: E0226 14:59:08.259200 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:59:16 crc kubenswrapper[4809]: I0226 14:59:16.244782 4809 generic.go:334] "Generic (PLEG): container finished" podID="79e3b79c-2611-4e20-b330-c37740777890" containerID="0ba7486dcc8921de08f8ebde0c363670c0a6d77cacca8ebee27ca62e8c6307ec" exitCode=0 Feb 26 14:59:16 crc kubenswrapper[4809]: I0226 14:59:16.244885 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" event={"ID":"79e3b79c-2611-4e20-b330-c37740777890","Type":"ContainerDied","Data":"0ba7486dcc8921de08f8ebde0c363670c0a6d77cacca8ebee27ca62e8c6307ec"} Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.730537 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.842859 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.842916 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.842975 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.842995 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843030 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843069 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843178 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843202 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843255 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzj6\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6\") pod 
\"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843271 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843305 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843357 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843426 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843474 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843529 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.843555 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0\") pod \"79e3b79c-2611-4e20-b330-c37740777890\" (UID: \"79e3b79c-2611-4e20-b330-c37740777890\") " Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.853601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.857254 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.860751 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.860831 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.863218 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.876163 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.876239 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.891141 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.891152 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6" (OuterVolumeSpecName: "kube-api-access-dlzj6") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "kube-api-access-dlzj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.898525 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.898652 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.898918 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.900162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.903285 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.943159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951849 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951882 4809 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951892 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzj6\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-kube-api-access-dlzj6\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951901 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951911 4809 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951920 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951931 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951940 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951951 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951964 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951972 4809 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951982 4809 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.951991 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/79e3b79c-2611-4e20-b330-c37740777890-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.952000 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.952020 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:17 crc kubenswrapper[4809]: I0226 14:59:17.956215 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory" (OuterVolumeSpecName: "inventory") pod "79e3b79c-2611-4e20-b330-c37740777890" (UID: "79e3b79c-2611-4e20-b330-c37740777890"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.079783 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e3b79c-2611-4e20-b330-c37740777890-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.269896 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.274806 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2" event={"ID":"79e3b79c-2611-4e20-b330-c37740777890","Type":"ContainerDied","Data":"bb14424cf080c7a037e6ae2e25e875d16f0488a2d7c9aa7058e181c37a108920"} Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.275040 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb14424cf080c7a037e6ae2e25e875d16f0488a2d7c9aa7058e181c37a108920" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.426910 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82"] Feb 26 14:59:18 crc kubenswrapper[4809]: E0226 14:59:18.427571 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e3b79c-2611-4e20-b330-c37740777890" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.427596 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e3b79c-2611-4e20-b330-c37740777890" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.427944 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e3b79c-2611-4e20-b330-c37740777890" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.429103 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.431968 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.432941 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.433113 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.433722 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.446373 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.473930 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82"] Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.595943 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.596045 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.596178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.596243 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjf2f\" (UniqueName: \"kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.596418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.699521 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.699895 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.700349 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.700616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjf2f\" (UniqueName: \"kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.700861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.702004 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.704110 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.706136 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.707118 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.721864 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjf2f\" (UniqueName: \"kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltx82\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:18 crc kubenswrapper[4809]: I0226 14:59:18.748912 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 14:59:19 crc kubenswrapper[4809]: W0226 14:59:19.357599 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3447f7c_8de1_42d8_8f51_9d78062f6dd3.slice/crio-997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f WatchSource:0}: Error finding container 997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f: Status 404 returned error can't find the container with id 997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f Feb 26 14:59:19 crc kubenswrapper[4809]: I0226 14:59:19.380272 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82"] Feb 26 14:59:20 crc kubenswrapper[4809]: I0226 14:59:20.289386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" event={"ID":"b3447f7c-8de1-42d8-8f51-9d78062f6dd3","Type":"ContainerStarted","Data":"d2c2ab81c9a4ab7cb69bcbbfb7870b2e63c30b4ccfd2aebdbda067bee794bc53"} Feb 26 14:59:20 crc kubenswrapper[4809]: I0226 14:59:20.289660 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" event={"ID":"b3447f7c-8de1-42d8-8f51-9d78062f6dd3","Type":"ContainerStarted","Data":"997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f"} Feb 26 14:59:20 crc kubenswrapper[4809]: I0226 14:59:20.356261 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" podStartSLOduration=1.84372907 podStartE2EDuration="2.356235009s" podCreationTimestamp="2026-02-26 14:59:18 +0000 UTC" firstStartedPulling="2026-02-26 14:59:19.360236143 +0000 UTC m=+2737.833556666" lastFinishedPulling="2026-02-26 14:59:19.872742072 +0000 UTC m=+2738.346062605" observedRunningTime="2026-02-26 14:59:20.342763142 +0000 UTC m=+2738.816083695" watchObservedRunningTime="2026-02-26 14:59:20.356235009 +0000 UTC m=+2738.829555562" Feb 26 14:59:23 crc kubenswrapper[4809]: I0226 14:59:23.257365 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:59:23 crc kubenswrapper[4809]: E0226 14:59:23.258457 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:59:38 crc kubenswrapper[4809]: I0226 14:59:38.257109 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:59:38 crc kubenswrapper[4809]: E0226 14:59:38.258054 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 14:59:53 crc kubenswrapper[4809]: I0226 14:59:53.257676 4809 scope.go:117] 
"RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 14:59:53 crc kubenswrapper[4809]: E0226 14:59:53.258552 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.173781 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535300-w5bzn"] Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.176502 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.179063 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.179213 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.179650 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.189460 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-w5bzn"] Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.276347 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b"] Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.276781 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwgf\" (UniqueName: \"kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf\") pod \"auto-csr-approver-29535300-w5bzn\" (UID: \"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77\") " pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.278272 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.285702 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.286125 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.304373 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b"] Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.378910 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcwgf\" (UniqueName: \"kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf\") pod \"auto-csr-approver-29535300-w5bzn\" (UID: \"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77\") " pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.378994 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7246\" (UniqueName: \"kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.379336 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.380789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.404688 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcwgf\" (UniqueName: \"kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf\") pod \"auto-csr-approver-29535300-w5bzn\" (UID: \"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77\") " pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.483735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7246\" (UniqueName: \"kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.484089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume\") pod 
\"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.484182 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.485242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.488272 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.502845 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7246\" (UniqueName: \"kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246\") pod \"collect-profiles-29535300-bqd9b\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.512912 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:00 crc kubenswrapper[4809]: I0226 15:00:00.610177 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.064497 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-w5bzn"] Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.173409 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b"] Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.863259 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" event={"ID":"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77","Type":"ContainerStarted","Data":"db01720b792c5d664963a3e767a7f31faf5356816c2b713e1cfa0abd42f6b23e"} Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.865222 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" event={"ID":"19430f16-b502-4902-8fd4-d0dabd493d3d","Type":"ContainerStarted","Data":"9f1761c808b490e58720fbbf4ecc7951b78f26a4033ea73ce904abbb4a7990c3"} Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.865257 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" event={"ID":"19430f16-b502-4902-8fd4-d0dabd493d3d","Type":"ContainerStarted","Data":"66c8e19a74ada055df236e9869c860a34585e5c96b75ed26fa2a6c74b3d596df"} Feb 26 15:00:01 crc kubenswrapper[4809]: I0226 15:00:01.886760 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" podStartSLOduration=1.8867386210000001 podStartE2EDuration="1.886738621s" podCreationTimestamp="2026-02-26 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:00:01.881766178 +0000 UTC m=+2780.355086701" watchObservedRunningTime="2026-02-26 15:00:01.886738621 +0000 UTC m=+2780.360059154" Feb 26 15:00:02 crc kubenswrapper[4809]: I0226 15:00:02.883352 4809 generic.go:334] "Generic (PLEG): container finished" podID="19430f16-b502-4902-8fd4-d0dabd493d3d" containerID="9f1761c808b490e58720fbbf4ecc7951b78f26a4033ea73ce904abbb4a7990c3" exitCode=0 Feb 26 15:00:02 crc kubenswrapper[4809]: I0226 15:00:02.883438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" event={"ID":"19430f16-b502-4902-8fd4-d0dabd493d3d","Type":"ContainerDied","Data":"9f1761c808b490e58720fbbf4ecc7951b78f26a4033ea73ce904abbb4a7990c3"} Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.300520 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.387691 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume\") pod \"19430f16-b502-4902-8fd4-d0dabd493d3d\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.387854 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7246\" (UniqueName: \"kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246\") pod \"19430f16-b502-4902-8fd4-d0dabd493d3d\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.388098 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume\") pod \"19430f16-b502-4902-8fd4-d0dabd493d3d\" (UID: \"19430f16-b502-4902-8fd4-d0dabd493d3d\") " Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.389090 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "19430f16-b502-4902-8fd4-d0dabd493d3d" (UID: "19430f16-b502-4902-8fd4-d0dabd493d3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.393936 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19430f16-b502-4902-8fd4-d0dabd493d3d" (UID: "19430f16-b502-4902-8fd4-d0dabd493d3d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.406732 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246" (OuterVolumeSpecName: "kube-api-access-b7246") pod "19430f16-b502-4902-8fd4-d0dabd493d3d" (UID: "19430f16-b502-4902-8fd4-d0dabd493d3d"). InnerVolumeSpecName "kube-api-access-b7246". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.490875 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19430f16-b502-4902-8fd4-d0dabd493d3d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.490914 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19430f16-b502-4902-8fd4-d0dabd493d3d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.490926 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7246\" (UniqueName: \"kubernetes.io/projected/19430f16-b502-4902-8fd4-d0dabd493d3d-kube-api-access-b7246\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.911572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" event={"ID":"19430f16-b502-4902-8fd4-d0dabd493d3d","Type":"ContainerDied","Data":"66c8e19a74ada055df236e9869c860a34585e5c96b75ed26fa2a6c74b3d596df"} Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.911616 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c8e19a74ada055df236e9869c860a34585e5c96b75ed26fa2a6c74b3d596df" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.911671 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b" Feb 26 15:00:04 crc kubenswrapper[4809]: I0226 15:00:04.990747 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h"] Feb 26 15:00:05 crc kubenswrapper[4809]: I0226 15:00:05.003566 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535255-l6b9h"] Feb 26 15:00:05 crc kubenswrapper[4809]: I0226 15:00:05.257128 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 15:00:05 crc kubenswrapper[4809]: E0226 15:00:05.257861 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:00:05 crc kubenswrapper[4809]: I0226 15:00:05.925606 4809 generic.go:334] "Generic (PLEG): container finished" podID="b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" containerID="cc26af5d93023db15c2fcd55cf44e03524d58b1bac7f1d32b4d8c95f483392ac" exitCode=0 Feb 26 15:00:05 crc kubenswrapper[4809]: I0226 15:00:05.925653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" event={"ID":"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77","Type":"ContainerDied","Data":"cc26af5d93023db15c2fcd55cf44e03524d58b1bac7f1d32b4d8c95f483392ac"} Feb 26 15:00:06 crc kubenswrapper[4809]: I0226 15:00:06.271550 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2048c3a-d91f-4ef5-93e1-41a621001c94" path="/var/lib/kubelet/pods/c2048c3a-d91f-4ef5-93e1-41a621001c94/volumes" Feb 26 15:00:07 crc 
kubenswrapper[4809]: I0226 15:00:07.402522 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.482738 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcwgf\" (UniqueName: \"kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf\") pod \"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77\" (UID: \"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77\") " Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.506488 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf" (OuterVolumeSpecName: "kube-api-access-hcwgf") pod "b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" (UID: "b45f9bde-e9cb-46ee-b1fd-6c422bcfef77"). InnerVolumeSpecName "kube-api-access-hcwgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.587877 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcwgf\" (UniqueName: \"kubernetes.io/projected/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77-kube-api-access-hcwgf\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.952579 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" event={"ID":"b45f9bde-e9cb-46ee-b1fd-6c422bcfef77","Type":"ContainerDied","Data":"db01720b792c5d664963a3e767a7f31faf5356816c2b713e1cfa0abd42f6b23e"} Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.952646 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db01720b792c5d664963a3e767a7f31faf5356816c2b713e1cfa0abd42f6b23e" Feb 26 15:00:07 crc kubenswrapper[4809]: I0226 15:00:07.952664 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535300-w5bzn" Feb 26 15:00:08 crc kubenswrapper[4809]: I0226 15:00:08.486887 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-dn9hc"] Feb 26 15:00:08 crc kubenswrapper[4809]: I0226 15:00:08.512158 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535294-dn9hc"] Feb 26 15:00:10 crc kubenswrapper[4809]: I0226 15:00:10.273631 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e9f26d-2936-433c-b595-3762d5fdb1cb" path="/var/lib/kubelet/pods/44e9f26d-2936-433c-b595-3762d5fdb1cb/volumes" Feb 26 15:00:12 crc kubenswrapper[4809]: I0226 15:00:12.284426 4809 scope.go:117] "RemoveContainer" containerID="a54faa2e4c8eb0b43c79ebf00cd21b4daa1d50335fa793406162e3ea7b00f3bf" Feb 26 15:00:12 crc kubenswrapper[4809]: I0226 15:00:12.351934 4809 scope.go:117] "RemoveContainer" containerID="062d1e05019b76e8d1a4c7213d80a62c8cb63f00bd7d68b6a5d7f53899958740" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.146362 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:16 crc kubenswrapper[4809]: E0226 15:00:16.147554 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19430f16-b502-4902-8fd4-d0dabd493d3d" containerName="collect-profiles" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.148140 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="19430f16-b502-4902-8fd4-d0dabd493d3d" containerName="collect-profiles" Feb 26 15:00:16 crc kubenswrapper[4809]: E0226 15:00:16.148164 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" containerName="oc" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.148204 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" containerName="oc" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.148822 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="19430f16-b502-4902-8fd4-d0dabd493d3d" containerName="collect-profiles" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.148860 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" containerName="oc" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.151139 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.182647 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.231855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.232325 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.232465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvvk\" (UniqueName: \"kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.336793 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.336925 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvvk\" (UniqueName: \"kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.337198 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.337930 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.338374 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.358735 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9cvvk\" (UniqueName: \"kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk\") pod \"redhat-marketplace-4ktf5\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:16 crc kubenswrapper[4809]: I0226 15:00:16.487828 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:17 crc kubenswrapper[4809]: I0226 15:00:17.007576 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:17 crc kubenswrapper[4809]: I0226 15:00:17.083659 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerStarted","Data":"1873f83b425c9c5d77bf0f0215d13a5559a8b748ae6f196cc586a0bbf77d37bd"} Feb 26 15:00:18 crc kubenswrapper[4809]: I0226 15:00:18.095403 4809 generic.go:334] "Generic (PLEG): container finished" podID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerID="16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d" exitCode=0 Feb 26 15:00:18 crc kubenswrapper[4809]: I0226 15:00:18.095474 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerDied","Data":"16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d"} Feb 26 15:00:19 crc kubenswrapper[4809]: I0226 15:00:19.257996 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 15:00:20 crc kubenswrapper[4809]: I0226 15:00:20.130701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321"} Feb 26 15:00:20 crc kubenswrapper[4809]: I0226 15:00:20.134471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerStarted","Data":"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa"} Feb 26 15:00:21 crc kubenswrapper[4809]: I0226 15:00:21.151551 4809 generic.go:334] "Generic (PLEG): container finished" podID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerID="016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa" exitCode=0 Feb 26 15:00:21 crc kubenswrapper[4809]: I0226 15:00:21.151674 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerDied","Data":"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa"} Feb 26 15:00:22 crc kubenswrapper[4809]: I0226 15:00:22.168637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerStarted","Data":"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0"} Feb 26 15:00:22 crc kubenswrapper[4809]: I0226 15:00:22.210307 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4ktf5" podStartSLOduration=2.699838324 podStartE2EDuration="6.210270242s" 
podCreationTimestamp="2026-02-26 15:00:16 +0000 UTC" firstStartedPulling="2026-02-26 15:00:18.097937639 +0000 UTC m=+2796.571258162" lastFinishedPulling="2026-02-26 15:00:21.608369517 +0000 UTC m=+2800.081690080" observedRunningTime="2026-02-26 15:00:22.19905016 +0000 UTC m=+2800.672370693" watchObservedRunningTime="2026-02-26 15:00:22.210270242 +0000 UTC m=+2800.683590795" Feb 26 15:00:26 crc kubenswrapper[4809]: I0226 15:00:26.253519 4809 generic.go:334] "Generic (PLEG): container finished" podID="b3447f7c-8de1-42d8-8f51-9d78062f6dd3" containerID="d2c2ab81c9a4ab7cb69bcbbfb7870b2e63c30b4ccfd2aebdbda067bee794bc53" exitCode=0 Feb 26 15:00:26 crc kubenswrapper[4809]: I0226 15:00:26.253611 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" event={"ID":"b3447f7c-8de1-42d8-8f51-9d78062f6dd3","Type":"ContainerDied","Data":"d2c2ab81c9a4ab7cb69bcbbfb7870b2e63c30b4ccfd2aebdbda067bee794bc53"} Feb 26 15:00:26 crc kubenswrapper[4809]: I0226 15:00:26.488291 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:26 crc kubenswrapper[4809]: I0226 15:00:26.488371 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:26 crc kubenswrapper[4809]: I0226 15:00:26.574367 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:27 crc kubenswrapper[4809]: I0226 15:00:27.395614 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:27 crc kubenswrapper[4809]: I0226 15:00:27.477397 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:27 crc kubenswrapper[4809]: I0226 15:00:27.866843 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.020001 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0\") pod \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.020088 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle\") pod \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.020235 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory\") pod \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.020277 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjf2f\" (UniqueName: \"kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f\") pod \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.020397 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam\") pod \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\" (UID: \"b3447f7c-8de1-42d8-8f51-9d78062f6dd3\") " Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.034365 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f" (OuterVolumeSpecName: "kube-api-access-xjf2f") pod "b3447f7c-8de1-42d8-8f51-9d78062f6dd3" (UID: "b3447f7c-8de1-42d8-8f51-9d78062f6dd3"). InnerVolumeSpecName "kube-api-access-xjf2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.036618 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b3447f7c-8de1-42d8-8f51-9d78062f6dd3" (UID: "b3447f7c-8de1-42d8-8f51-9d78062f6dd3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.049982 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b3447f7c-8de1-42d8-8f51-9d78062f6dd3" (UID: "b3447f7c-8de1-42d8-8f51-9d78062f6dd3"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.061145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3447f7c-8de1-42d8-8f51-9d78062f6dd3" (UID: "b3447f7c-8de1-42d8-8f51-9d78062f6dd3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.080105 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory" (OuterVolumeSpecName: "inventory") pod "b3447f7c-8de1-42d8-8f51-9d78062f6dd3" (UID: "b3447f7c-8de1-42d8-8f51-9d78062f6dd3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.123961 4809 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.124002 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.124068 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.124081 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjf2f\" (UniqueName: \"kubernetes.io/projected/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-kube-api-access-xjf2f\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.124096 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3447f7c-8de1-42d8-8f51-9d78062f6dd3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.300656 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" event={"ID":"b3447f7c-8de1-42d8-8f51-9d78062f6dd3","Type":"ContainerDied","Data":"997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f"} Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.300693 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997da6675530ee89bb0269bd54ab76f72831caaa0cc2c0431a5651c625464a2f" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.300697 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltx82" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.526501 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm"] Feb 26 15:00:28 crc kubenswrapper[4809]: E0226 15:00:28.528133 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3447f7c-8de1-42d8-8f51-9d78062f6dd3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.528211 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3447f7c-8de1-42d8-8f51-9d78062f6dd3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.528495 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3447f7c-8de1-42d8-8f51-9d78062f6dd3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.529383 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.532498 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.532849 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.533086 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.533345 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.534780 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.534980 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.558715 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm"] Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643623 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643725 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643813 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ntllp\" (UniqueName: \"kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643831 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643877 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.643915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746710 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746807 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntllp\" (UniqueName: \"kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746888 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.746927 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.752864 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.753112 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.753154 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.753539 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.758673 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 
15:00:28.762712 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntllp\" (UniqueName: \"kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:28 crc kubenswrapper[4809]: I0226 15:00:28.848949 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.314795 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4ktf5" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="registry-server" containerID="cri-o://61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0" gracePeriod=2 Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.465604 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm"] Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.723393 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.874953 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities\") pod \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.875118 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content\") pod \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.875178 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cvvk\" (UniqueName: \"kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk\") pod \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\" (UID: \"68c54c7b-c554-4cc2-b284-b1a7f5e2682c\") " Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.876396 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities" (OuterVolumeSpecName: "utilities") pod "68c54c7b-c554-4cc2-b284-b1a7f5e2682c" (UID: "68c54c7b-c554-4cc2-b284-b1a7f5e2682c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.884562 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk" (OuterVolumeSpecName: "kube-api-access-9cvvk") pod "68c54c7b-c554-4cc2-b284-b1a7f5e2682c" (UID: "68c54c7b-c554-4cc2-b284-b1a7f5e2682c"). InnerVolumeSpecName "kube-api-access-9cvvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.899147 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68c54c7b-c554-4cc2-b284-b1a7f5e2682c" (UID: "68c54c7b-c554-4cc2-b284-b1a7f5e2682c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.978299 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.978345 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:29 crc kubenswrapper[4809]: I0226 15:00:29.978363 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cvvk\" (UniqueName: \"kubernetes.io/projected/68c54c7b-c554-4cc2-b284-b1a7f5e2682c-kube-api-access-9cvvk\") on node \"crc\" DevicePath \"\"" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.329003 4809 generic.go:334] "Generic (PLEG): container finished" podID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerID="61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0" exitCode=0 Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.329082 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerDied","Data":"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0"} Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.329125 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4ktf5" event={"ID":"68c54c7b-c554-4cc2-b284-b1a7f5e2682c","Type":"ContainerDied","Data":"1873f83b425c9c5d77bf0f0215d13a5559a8b748ae6f196cc586a0bbf77d37bd"} Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.329146 4809 scope.go:117] "RemoveContainer" containerID="61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.329304 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4ktf5" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.332246 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" event={"ID":"26907d88-fa6b-43f0-b59a-d8ce3a779fd4","Type":"ContainerStarted","Data":"10b025aec437b82f34183e20e07623bbb69360bf0ae5c17575d683ac77803721"} Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.371063 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.373214 4809 scope.go:117] "RemoveContainer" containerID="016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.387985 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4ktf5"] Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.593794 4809 scope.go:117] "RemoveContainer" containerID="16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.636668 4809 scope.go:117] "RemoveContainer" containerID="61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0" Feb 26 15:00:30 crc kubenswrapper[4809]: E0226 15:00:30.637574 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0\": container with ID starting with 61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0 not found: ID does not exist" containerID="61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.637630 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0"} err="failed to get container status \"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0\": rpc error: code = NotFound desc = could not find container \"61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0\": container with ID starting with 61f89033083cad1fcb5cbc0236e4bccc7cb24453eaa4b66e7ba5cc1297c3a5c0 not found: ID does not exist" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.637656 4809 scope.go:117] "RemoveContainer" containerID="016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa" Feb 26 15:00:30 crc kubenswrapper[4809]: E0226 15:00:30.637947 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa\": container with ID starting with 016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa not found: ID does not exist" containerID="016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.637983 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa"} err="failed to get container status \"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa\": rpc error: code = NotFound desc = could not find container \"016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa\": container with ID starting with 
016712390a4eaada43d410374af53977a4644c981f4543f2bc8afad42857abfa not found: ID does not exist" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.637996 4809 scope.go:117] "RemoveContainer" containerID="16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d" Feb 26 15:00:30 crc kubenswrapper[4809]: E0226 15:00:30.639172 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d\": container with ID starting with 16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d not found: ID does not exist" containerID="16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d" Feb 26 15:00:30 crc kubenswrapper[4809]: I0226 15:00:30.639222 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d"} err="failed to get container status \"16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d\": rpc error: code = NotFound desc = could not find container \"16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d\": container with ID starting with 16b1c3021e5a980e889fd2c9e04eed9148ff4197dfc1cb600fd93547c41fb75d not found: ID does not exist" Feb 26 15:00:31 crc kubenswrapper[4809]: I0226 15:00:31.351751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" event={"ID":"26907d88-fa6b-43f0-b59a-d8ce3a779fd4","Type":"ContainerStarted","Data":"975ecf2f83ed0bae62ffb0edeecf015f963e528d64931167b207c9219b96c3a0"} Feb 26 15:00:31 crc kubenswrapper[4809]: I0226 15:00:31.399907 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" podStartSLOduration=2.4922703410000002 podStartE2EDuration="3.399877618s" podCreationTimestamp="2026-02-26 15:00:28 +0000 UTC" firstStartedPulling="2026-02-26 15:00:29.46645585 +0000 UTC m=+2807.939776393" lastFinishedPulling="2026-02-26 15:00:30.374063147 +0000 UTC m=+2808.847383670" observedRunningTime="2026-02-26 15:00:31.382216241 +0000 UTC m=+2809.855536764" watchObservedRunningTime="2026-02-26 15:00:31.399877618 +0000 UTC m=+2809.873198151" Feb 26 15:00:32 crc kubenswrapper[4809]: I0226 15:00:32.285403 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" path="/var/lib/kubelet/pods/68c54c7b-c554-4cc2-b284-b1a7f5e2682c/volumes" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.178530 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535301-62cv2"] Feb 26 15:01:00 crc kubenswrapper[4809]: E0226 15:01:00.179521 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="extract-content" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.179537 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="extract-content" Feb 26 15:01:00 crc kubenswrapper[4809]: E0226 15:01:00.179567 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="registry-server" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.179575 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="registry-server" Feb 26 
15:01:00 crc kubenswrapper[4809]: E0226 15:01:00.179589 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="extract-utilities" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.179598 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="extract-utilities" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.179885 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c54c7b-c554-4cc2-b284-b1a7f5e2682c" containerName="registry-server" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.180990 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.194723 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535301-62cv2"] Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.326804 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.326865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.326902 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjfb\" (UniqueName: \"kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.326954 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.432409 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.432481 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.432554 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjfb\" (UniqueName: 
\"kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.432652 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.440351 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.440925 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.443460 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.463086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjfb\" (UniqueName: \"kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb\") pod \"keystone-cron-29535301-62cv2\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:00 crc kubenswrapper[4809]: I0226 15:01:00.515071 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:01 crc kubenswrapper[4809]: I0226 15:01:01.024554 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535301-62cv2"] Feb 26 15:01:01 crc kubenswrapper[4809]: I0226 15:01:01.723133 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-62cv2" event={"ID":"7d19d3f1-87e6-4318-be1c-2065f711f4da","Type":"ContainerStarted","Data":"dc7ac0a4033ed552010a631c610c35fb28d8631126d92882e33b86d9e2b82750"} Feb 26 15:01:01 crc kubenswrapper[4809]: I0226 15:01:01.723480 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-62cv2" event={"ID":"7d19d3f1-87e6-4318-be1c-2065f711f4da","Type":"ContainerStarted","Data":"7026138300fd8ad56f383e431264fd091fa3082e69c0a4979414a4693071df4d"} Feb 26 15:01:01 crc kubenswrapper[4809]: I0226 15:01:01.754582 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535301-62cv2" podStartSLOduration=1.754562762 podStartE2EDuration="1.754562762s" podCreationTimestamp="2026-02-26 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:01:01.748073127 +0000 UTC m=+2840.221393660" watchObservedRunningTime="2026-02-26 15:01:01.754562762 +0000 UTC m=+2840.227883295" Feb 26 15:01:04 crc kubenswrapper[4809]: I0226 15:01:04.758541 4809 generic.go:334] "Generic (PLEG): container finished" podID="7d19d3f1-87e6-4318-be1c-2065f711f4da" containerID="dc7ac0a4033ed552010a631c610c35fb28d8631126d92882e33b86d9e2b82750" exitCode=0 Feb 26 15:01:04 crc kubenswrapper[4809]: I0226 15:01:04.758657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-62cv2" event={"ID":"7d19d3f1-87e6-4318-be1c-2065f711f4da","Type":"ContainerDied","Data":"dc7ac0a4033ed552010a631c610c35fb28d8631126d92882e33b86d9e2b82750"} Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.315823 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.419393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data\") pod \"7d19d3f1-87e6-4318-be1c-2065f711f4da\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.420271 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle\") pod \"7d19d3f1-87e6-4318-be1c-2065f711f4da\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.420562 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwjfb\" (UniqueName: \"kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb\") pod \"7d19d3f1-87e6-4318-be1c-2065f711f4da\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.420808 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys\") pod \"7d19d3f1-87e6-4318-be1c-2065f711f4da\" (UID: \"7d19d3f1-87e6-4318-be1c-2065f711f4da\") " Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.429399 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb" (OuterVolumeSpecName: "kube-api-access-xwjfb") pod "7d19d3f1-87e6-4318-be1c-2065f711f4da" (UID: "7d19d3f1-87e6-4318-be1c-2065f711f4da"). InnerVolumeSpecName "kube-api-access-xwjfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.433542 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d19d3f1-87e6-4318-be1c-2065f711f4da" (UID: "7d19d3f1-87e6-4318-be1c-2065f711f4da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.474161 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d19d3f1-87e6-4318-be1c-2065f711f4da" (UID: "7d19d3f1-87e6-4318-be1c-2065f711f4da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.494509 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data" (OuterVolumeSpecName: "config-data") pod "7d19d3f1-87e6-4318-be1c-2065f711f4da" (UID: "7d19d3f1-87e6-4318-be1c-2065f711f4da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.543672 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.543709 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.543722 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwjfb\" (UniqueName: \"kubernetes.io/projected/7d19d3f1-87e6-4318-be1c-2065f711f4da-kube-api-access-xwjfb\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.543731 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d19d3f1-87e6-4318-be1c-2065f711f4da-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.780895 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535301-62cv2" event={"ID":"7d19d3f1-87e6-4318-be1c-2065f711f4da","Type":"ContainerDied","Data":"7026138300fd8ad56f383e431264fd091fa3082e69c0a4979414a4693071df4d"} Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.780960 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7026138300fd8ad56f383e431264fd091fa3082e69c0a4979414a4693071df4d" Feb 26 15:01:06 crc kubenswrapper[4809]: I0226 15:01:06.781037 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535301-62cv2" Feb 26 15:01:20 crc kubenswrapper[4809]: I0226 15:01:20.941276 4809 generic.go:334] "Generic (PLEG): container finished" podID="26907d88-fa6b-43f0-b59a-d8ce3a779fd4" containerID="975ecf2f83ed0bae62ffb0edeecf015f963e528d64931167b207c9219b96c3a0" exitCode=0 Feb 26 15:01:20 crc kubenswrapper[4809]: I0226 15:01:20.941417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" event={"ID":"26907d88-fa6b-43f0-b59a-d8ce3a779fd4","Type":"ContainerDied","Data":"975ecf2f83ed0bae62ffb0edeecf015f963e528d64931167b207c9219b96c3a0"} Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.482794 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.526823 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.527194 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntllp\" (UniqueName: \"kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.527252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.527312 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.527335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.527373 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\" (UID: \"26907d88-fa6b-43f0-b59a-d8ce3a779fd4\") " Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.532215 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp" (OuterVolumeSpecName: "kube-api-access-ntllp") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "kube-api-access-ntllp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.537106 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.592528 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.630176 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.632731 4809 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.632771 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntllp\" (UniqueName: \"kubernetes.io/projected/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-kube-api-access-ntllp\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.632791 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.632804 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.633316 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory" (OuterVolumeSpecName: "inventory") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.656409 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26907d88-fa6b-43f0-b59a-d8ce3a779fd4" (UID: "26907d88-fa6b-43f0-b59a-d8ce3a779fd4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.736625 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.736681 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26907d88-fa6b-43f0-b59a-d8ce3a779fd4-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.968086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" event={"ID":"26907d88-fa6b-43f0-b59a-d8ce3a779fd4","Type":"ContainerDied","Data":"10b025aec437b82f34183e20e07623bbb69360bf0ae5c17575d683ac77803721"} Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.968125 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b025aec437b82f34183e20e07623bbb69360bf0ae5c17575d683ac77803721" Feb 26 15:01:22 crc kubenswrapper[4809]: I0226 15:01:22.968136 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.105382 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf"] Feb 26 15:01:23 crc kubenswrapper[4809]: E0226 15:01:23.105973 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26907d88-fa6b-43f0-b59a-d8ce3a779fd4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.105993 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="26907d88-fa6b-43f0-b59a-d8ce3a779fd4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 15:01:23 crc kubenswrapper[4809]: E0226 15:01:23.106056 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d19d3f1-87e6-4318-be1c-2065f711f4da" containerName="keystone-cron" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.106065 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d19d3f1-87e6-4318-be1c-2065f711f4da" containerName="keystone-cron" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.106307 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d19d3f1-87e6-4318-be1c-2065f711f4da" containerName="keystone-cron" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.106330 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="26907d88-fa6b-43f0-b59a-d8ce3a779fd4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.107152 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.109652 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.109923 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.110623 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.110741 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.111168 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.126249 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf"] Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.146648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.147100 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxh8\" (UniqueName: \"kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.147192 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.147331 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.147380 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.249577 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.249650 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.249734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdxh8\" (UniqueName: \"kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.249792 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.249896 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.255243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.255310 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.255444 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.256431 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.265666 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdxh8\" (UniqueName: \"kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8vngf\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:23 crc kubenswrapper[4809]: I0226 15:01:23.455064 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:01:24 crc kubenswrapper[4809]: I0226 15:01:24.115953 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf"] Feb 26 15:01:24 crc kubenswrapper[4809]: I0226 15:01:24.120282 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:01:24 crc kubenswrapper[4809]: I0226 15:01:24.996536 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" event={"ID":"38a4f820-36f9-46c4-b55e-bee9f76ddc4b","Type":"ContainerStarted","Data":"b25e019d89538217cdd5530717a6ef81d9079f2bc3a075b79de48ac4e8b3d5e0"} Feb 26 15:01:26 crc kubenswrapper[4809]: I0226 15:01:26.019434 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" event={"ID":"38a4f820-36f9-46c4-b55e-bee9f76ddc4b","Type":"ContainerStarted","Data":"f44e458840964a3b47a56fc2b5d2657c0e4a65dad532e95e0cb256e82364121e"} Feb 26 15:01:26 crc kubenswrapper[4809]: I0226 15:01:26.048499 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" podStartSLOduration=2.473385613 podStartE2EDuration="3.048468392s" podCreationTimestamp="2026-02-26 15:01:23 +0000 UTC" firstStartedPulling="2026-02-26 15:01:24.120037318 +0000 UTC m=+2862.593357841" lastFinishedPulling="2026-02-26 15:01:24.695120057 +0000 UTC m=+2863.168440620" observedRunningTime="2026-02-26 15:01:26.045367263 +0000 UTC m=+2864.518687796" watchObservedRunningTime="2026-02-26 15:01:26.048468392 +0000 UTC m=+2864.521788945" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.168396 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535302-cwcjk"] Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.172825 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.178305 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.182539 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.182552 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.183243 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-cwcjk"] Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.268455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n84r\" (UniqueName: \"kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r\") pod \"auto-csr-approver-29535302-cwcjk\" (UID: \"519243e5-18f0-4642-a293-6d0ec1b7ef3c\") " pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.372695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n84r\" (UniqueName: \"kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r\") pod \"auto-csr-approver-29535302-cwcjk\" (UID: \"519243e5-18f0-4642-a293-6d0ec1b7ef3c\") " pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.413973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n84r\" (UniqueName: \"kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r\") pod \"auto-csr-approver-29535302-cwcjk\" (UID: \"519243e5-18f0-4642-a293-6d0ec1b7ef3c\") " pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:00 crc kubenswrapper[4809]: I0226 15:02:00.509945 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:01 crc kubenswrapper[4809]: I0226 15:02:01.015372 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-cwcjk"] Feb 26 15:02:01 crc kubenswrapper[4809]: I0226 15:02:01.483906 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" event={"ID":"519243e5-18f0-4642-a293-6d0ec1b7ef3c","Type":"ContainerStarted","Data":"169d37507d1ebe441bd87a35f382734e87ea875c34797e81b5d23aeba7af1cd4"} Feb 26 15:02:02 crc kubenswrapper[4809]: I0226 15:02:02.506560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" event={"ID":"519243e5-18f0-4642-a293-6d0ec1b7ef3c","Type":"ContainerStarted","Data":"b40c5272e41cc656217fa75e155b8d1e773894472bdaf2eaec17fa1d2008b75e"} Feb 26 15:02:02 crc kubenswrapper[4809]: I0226 15:02:02.528883 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" podStartSLOduration=1.4821285419999999 podStartE2EDuration="2.528862781s" podCreationTimestamp="2026-02-26 15:02:00 +0000 UTC" firstStartedPulling="2026-02-26 15:02:01.032876284 +0000 UTC m=+2899.506196827" lastFinishedPulling="2026-02-26 15:02:02.079610543 +0000 UTC m=+2900.552931066" observedRunningTime="2026-02-26 15:02:02.521808219 +0000 UTC m=+2900.995128752" watchObservedRunningTime="2026-02-26 15:02:02.528862781 +0000 UTC m=+2901.002183304" Feb 26 15:02:03 crc kubenswrapper[4809]: I0226 15:02:03.522098 4809 generic.go:334] "Generic (PLEG): container finished" podID="519243e5-18f0-4642-a293-6d0ec1b7ef3c" containerID="b40c5272e41cc656217fa75e155b8d1e773894472bdaf2eaec17fa1d2008b75e" exitCode=0 Feb 26 15:02:03 crc kubenswrapper[4809]: I0226 15:02:03.522138 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" event={"ID":"519243e5-18f0-4642-a293-6d0ec1b7ef3c","Type":"ContainerDied","Data":"b40c5272e41cc656217fa75e155b8d1e773894472bdaf2eaec17fa1d2008b75e"} Feb 26 15:02:04 crc kubenswrapper[4809]: I0226 15:02:04.955456 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.118342 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n84r\" (UniqueName: \"kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r\") pod \"519243e5-18f0-4642-a293-6d0ec1b7ef3c\" (UID: \"519243e5-18f0-4642-a293-6d0ec1b7ef3c\") " Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.125078 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r" (OuterVolumeSpecName: "kube-api-access-7n84r") pod "519243e5-18f0-4642-a293-6d0ec1b7ef3c" (UID: "519243e5-18f0-4642-a293-6d0ec1b7ef3c"). InnerVolumeSpecName "kube-api-access-7n84r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.221764 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n84r\" (UniqueName: \"kubernetes.io/projected/519243e5-18f0-4642-a293-6d0ec1b7ef3c-kube-api-access-7n84r\") on node \"crc\" DevicePath \"\"" Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.355690 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-mvh7b"] Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.368652 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535296-mvh7b"] Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.549505 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" event={"ID":"519243e5-18f0-4642-a293-6d0ec1b7ef3c","Type":"ContainerDied","Data":"169d37507d1ebe441bd87a35f382734e87ea875c34797e81b5d23aeba7af1cd4"} Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.549555 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169d37507d1ebe441bd87a35f382734e87ea875c34797e81b5d23aeba7af1cd4" Feb 26 15:02:05 crc kubenswrapper[4809]: I0226 15:02:05.549617 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535302-cwcjk" Feb 26 15:02:06 crc kubenswrapper[4809]: I0226 15:02:06.276404 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29df1fbc-1739-4d79-a692-de2ca9570d28" path="/var/lib/kubelet/pods/29df1fbc-1739-4d79-a692-de2ca9570d28/volumes" Feb 26 15:02:12 crc kubenswrapper[4809]: I0226 15:02:12.557743 4809 scope.go:117] "RemoveContainer" containerID="e1034ee5383864858fa9cbd9672825d3108fabe8f72d30d63718e78d4d464096" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.543231 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:30 crc kubenswrapper[4809]: E0226 15:02:30.544750 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519243e5-18f0-4642-a293-6d0ec1b7ef3c" containerName="oc" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.544777 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="519243e5-18f0-4642-a293-6d0ec1b7ef3c" containerName="oc" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.545345 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="519243e5-18f0-4642-a293-6d0ec1b7ef3c" containerName="oc" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.548449 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.554712 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.656220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.656344 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.656422 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qm6\" (UniqueName: \"kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.759263 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.759329 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qm6\" (UniqueName: \"kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.759519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.759999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.760162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.788341 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-27qm6\" (UniqueName: \"kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6\") pod \"community-operators-x4g69\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:30 crc kubenswrapper[4809]: I0226 15:02:30.881966 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:31 crc kubenswrapper[4809]: I0226 15:02:31.418158 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:31 crc kubenswrapper[4809]: I0226 15:02:31.905676 4809 generic.go:334] "Generic (PLEG): container finished" podID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerID="a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4" exitCode=0 Feb 26 15:02:31 crc kubenswrapper[4809]: I0226 15:02:31.905772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerDied","Data":"a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4"} Feb 26 15:02:31 crc kubenswrapper[4809]: I0226 15:02:31.906072 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerStarted","Data":"a5f5de49e545845b16c7d87fcb294900c11f0f6331e03ee724f2d12fac259a98"} Feb 26 15:02:33 crc kubenswrapper[4809]: I0226 15:02:33.932702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerStarted","Data":"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e"} Feb 26 15:02:34 crc kubenswrapper[4809]: I0226 15:02:34.953374 4809 generic.go:334] "Generic (PLEG): container finished" podID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerID="cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e" exitCode=0 Feb 26 15:02:34 crc kubenswrapper[4809]: I0226 15:02:34.953710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerDied","Data":"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e"} Feb 26 15:02:36 crc kubenswrapper[4809]: I0226 15:02:36.985164 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerStarted","Data":"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc"} Feb 26 15:02:37 crc kubenswrapper[4809]: I0226 15:02:37.011414 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4g69" podStartSLOduration=2.981986084 podStartE2EDuration="7.01139502s" podCreationTimestamp="2026-02-26 15:02:30 +0000 UTC" firstStartedPulling="2026-02-26 15:02:31.911471594 +0000 UTC m=+2930.384792117" lastFinishedPulling="2026-02-26 15:02:35.9408805 +0000 UTC m=+2934.414201053" observedRunningTime="2026-02-26 15:02:37.006798998 +0000 UTC m=+2935.480119531" watchObservedRunningTime="2026-02-26 15:02:37.01139502 +0000 UTC m=+2935.484715543" Feb 26 15:02:40 crc kubenswrapper[4809]: I0226 15:02:40.882232 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:40 crc kubenswrapper[4809]: I0226 15:02:40.882832 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:40 crc kubenswrapper[4809]: I0226 15:02:40.971365 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:41 crc kubenswrapper[4809]: I0226 15:02:41.114810 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:41 crc kubenswrapper[4809]: I0226 15:02:41.211444 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:41 crc kubenswrapper[4809]: I0226 15:02:41.794791 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:02:41 crc kubenswrapper[4809]: I0226 15:02:41.794854 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.078570 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4g69" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="registry-server" containerID="cri-o://83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc" gracePeriod=2 Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.735153 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.867879 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities\") pod \"8698a393-5b22-40d2-a88d-df1aedd99a34\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.867922 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content\") pod \"8698a393-5b22-40d2-a88d-df1aedd99a34\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.868052 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qm6\" (UniqueName: \"kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6\") pod \"8698a393-5b22-40d2-a88d-df1aedd99a34\" (UID: \"8698a393-5b22-40d2-a88d-df1aedd99a34\") " Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.870133 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities" (OuterVolumeSpecName: "utilities") pod "8698a393-5b22-40d2-a88d-df1aedd99a34" (UID: "8698a393-5b22-40d2-a88d-df1aedd99a34"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.877857 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6" (OuterVolumeSpecName: "kube-api-access-27qm6") pod "8698a393-5b22-40d2-a88d-df1aedd99a34" (UID: "8698a393-5b22-40d2-a88d-df1aedd99a34"). InnerVolumeSpecName "kube-api-access-27qm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.956933 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8698a393-5b22-40d2-a88d-df1aedd99a34" (UID: "8698a393-5b22-40d2-a88d-df1aedd99a34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.971341 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.971393 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8698a393-5b22-40d2-a88d-df1aedd99a34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:02:43 crc kubenswrapper[4809]: I0226 15:02:43.971419 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qm6\" (UniqueName: \"kubernetes.io/projected/8698a393-5b22-40d2-a88d-df1aedd99a34-kube-api-access-27qm6\") on node \"crc\" DevicePath \"\"" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.094433 4809 generic.go:334] "Generic (PLEG): container finished" podID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerID="83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc" exitCode=0 Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.094497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerDied","Data":"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc"} Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.094513 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4g69" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.094549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4g69" event={"ID":"8698a393-5b22-40d2-a88d-df1aedd99a34","Type":"ContainerDied","Data":"a5f5de49e545845b16c7d87fcb294900c11f0f6331e03ee724f2d12fac259a98"} Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.094584 4809 scope.go:117] "RemoveContainer" containerID="83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.123853 4809 scope.go:117] "RemoveContainer" containerID="cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.152061 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.161641 4809 scope.go:117] "RemoveContainer" containerID="a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.211641 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4g69"] Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.238991 4809 scope.go:117] "RemoveContainer" containerID="83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc" Feb 26 15:02:44 crc kubenswrapper[4809]: E0226 15:02:44.239243 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc\": container with ID starting with 83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc not found: ID does not exist" containerID="83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.239276 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc"} err="failed to get container status \"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc\": rpc error: code = NotFound desc = could not find container \"83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc\": container with ID starting with 83a3751eae5dec557b404066aae20fd3fd60c5bdfdf6e89f65d18613d9d71cbc not found: ID does not exist" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.239298 4809 scope.go:117] "RemoveContainer" containerID="cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e" Feb 26 15:02:44 crc kubenswrapper[4809]: E0226 15:02:44.239473 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e\": container with ID starting with cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e not found: ID does not exist" containerID="cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.239509 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e"} err="failed to get container status \"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e\": rpc error: code = NotFound desc = could not find 
container \"cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e\": container with ID starting with cb4fd3338d0dde4fccd7a2ecdff6ca831a16d31bebc0435f7fefb0f79599898e not found: ID does not exist" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.239535 4809 scope.go:117] "RemoveContainer" containerID="a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4" Feb 26 15:02:44 crc kubenswrapper[4809]: E0226 15:02:44.239663 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4\": container with ID starting with a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4 not found: ID does not exist" containerID="a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.239683 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4"} err="failed to get container status \"a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4\": rpc error: code = NotFound desc = could not find container \"a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4\": container with ID starting with a7a8e5e202fbdd3b85a9d6afbc690ed0b5b5b5f02b13fd1d4313ce3230a1bcc4 not found: ID does not exist" Feb 26 15:02:44 crc kubenswrapper[4809]: I0226 15:02:44.269935 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" path="/var/lib/kubelet/pods/8698a393-5b22-40d2-a88d-df1aedd99a34/volumes" Feb 26 15:03:11 crc kubenswrapper[4809]: I0226 15:03:11.794812 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:03:11 crc kubenswrapper[4809]: I0226 15:03:11.795527 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.246576 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:35 crc kubenswrapper[4809]: E0226 15:03:35.247643 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="extract-utilities" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.247663 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="extract-utilities" Feb 26 15:03:35 crc kubenswrapper[4809]: E0226 15:03:35.247728 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="registry-server" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.247737 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="registry-server" Feb 26 15:03:35 crc kubenswrapper[4809]: E0226 15:03:35.247762 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="extract-content" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.247773 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="extract-content" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.248106 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8698a393-5b22-40d2-a88d-df1aedd99a34" containerName="registry-server" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.251522 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.261757 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.415760 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.415922 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.415987 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvtv\" (UniqueName: \"kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.518496 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.518622 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.518663 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jvtv\" (UniqueName: \"kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.519065 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content\") pod 
\"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.519321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.543719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jvtv\" (UniqueName: \"kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv\") pod \"certified-operators-wj8jr\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:35 crc kubenswrapper[4809]: I0226 15:03:35.587996 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:36 crc kubenswrapper[4809]: I0226 15:03:36.143052 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:36 crc kubenswrapper[4809]: I0226 15:03:36.885925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerStarted","Data":"f4abd8bdb3579e161ec992a1149aa40d330757ed3833cf2832fa028acb890100"} Feb 26 15:03:37 crc kubenswrapper[4809]: I0226 15:03:37.899990 4809 generic.go:334] "Generic (PLEG): container finished" podID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerID="7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829" exitCode=0 Feb 26 15:03:37 crc kubenswrapper[4809]: I0226 15:03:37.900067 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerDied","Data":"7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829"} Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.793720 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.794574 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.794655 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.796492 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.796638 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321" gracePeriod=600 Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.951681 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321" exitCode=0 Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.951755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321"} Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.951787 4809 scope.go:117] "RemoveContainer" containerID="722b8cbdae16f5f50a59d06293e4f16fe535cd9b30ebd06df14c66c851faad79" Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.955042 4809 generic.go:334] "Generic (PLEG): container finished" podID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerID="b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0" exitCode=0 Feb 26 15:03:41 crc kubenswrapper[4809]: I0226 15:03:41.955071 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerDied","Data":"b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0"} Feb 26 15:03:42 crc kubenswrapper[4809]: I0226 15:03:42.970047 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"} Feb 26 15:03:43 crc kubenswrapper[4809]: I0226 15:03:43.991430 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerStarted","Data":"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9"} Feb 26 15:03:44 crc kubenswrapper[4809]: I0226 15:03:44.027064 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wj8jr" podStartSLOduration=3.531724089 podStartE2EDuration="9.027043391s" podCreationTimestamp="2026-02-26 15:03:35 +0000 UTC" firstStartedPulling="2026-02-26 15:03:37.902113789 +0000 UTC m=+2996.375434312" lastFinishedPulling="2026-02-26 15:03:43.397433081 +0000 UTC m=+3001.870753614" observedRunningTime="2026-02-26 15:03:44.018190588 +0000 UTC m=+3002.491511111" watchObservedRunningTime="2026-02-26 15:03:44.027043391 +0000 UTC m=+3002.500363914" Feb 26 15:03:45 crc kubenswrapper[4809]: I0226 15:03:45.589731 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:45 crc kubenswrapper[4809]: I0226 15:03:45.590156 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:46 crc kubenswrapper[4809]: I0226 15:03:46.678694 4809 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-wj8jr" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="registry-server" probeResult="failure" output=< Feb 26 15:03:46 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:03:46 crc kubenswrapper[4809]: > Feb 26 15:03:55 crc kubenswrapper[4809]: I0226 15:03:55.663419 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:55 crc kubenswrapper[4809]: I0226 15:03:55.733484 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:55 crc kubenswrapper[4809]: I0226 15:03:55.900154 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.161095 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wj8jr" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="registry-server" containerID="cri-o://b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9" gracePeriod=2 Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.763191 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.890290 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jvtv\" (UniqueName: \"kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv\") pod \"fd95f6be-905f-4234-a7a3-e51249d8a393\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.890647 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content\") pod \"fd95f6be-905f-4234-a7a3-e51249d8a393\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.890748 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities\") pod \"fd95f6be-905f-4234-a7a3-e51249d8a393\" (UID: \"fd95f6be-905f-4234-a7a3-e51249d8a393\") " Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.892351 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities" (OuterVolumeSpecName: "utilities") pod "fd95f6be-905f-4234-a7a3-e51249d8a393" (UID: "fd95f6be-905f-4234-a7a3-e51249d8a393"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.897862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv" (OuterVolumeSpecName: "kube-api-access-2jvtv") pod "fd95f6be-905f-4234-a7a3-e51249d8a393" (UID: "fd95f6be-905f-4234-a7a3-e51249d8a393"). InnerVolumeSpecName "kube-api-access-2jvtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.947653 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd95f6be-905f-4234-a7a3-e51249d8a393" (UID: "fd95f6be-905f-4234-a7a3-e51249d8a393"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.993191 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.993219 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd95f6be-905f-4234-a7a3-e51249d8a393-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:57 crc kubenswrapper[4809]: I0226 15:03:57.993229 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jvtv\" (UniqueName: \"kubernetes.io/projected/fd95f6be-905f-4234-a7a3-e51249d8a393-kube-api-access-2jvtv\") on node \"crc\" DevicePath \"\"" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.173781 4809 generic.go:334] "Generic (PLEG): container finished" podID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerID="b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9" exitCode=0 Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.173852 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wj8jr" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.173835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerDied","Data":"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9"} Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.173997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj8jr" event={"ID":"fd95f6be-905f-4234-a7a3-e51249d8a393","Type":"ContainerDied","Data":"f4abd8bdb3579e161ec992a1149aa40d330757ed3833cf2832fa028acb890100"} Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.174050 4809 scope.go:117] "RemoveContainer" containerID="b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.203609 4809 scope.go:117] "RemoveContainer" containerID="b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.212892 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.250548 4809 scope.go:117] "RemoveContainer" containerID="7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.280986 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wj8jr"] Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.290387 4809 scope.go:117] "RemoveContainer" containerID="b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9" Feb 26 15:03:58 crc kubenswrapper[4809]: E0226 15:03:58.290786 4809 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9\": container with ID starting with b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9 not found: ID does not exist" containerID="b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.290836 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9"} err="failed to get container status \"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9\": rpc error: code = NotFound desc = could not find container \"b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9\": container with ID starting with b08e8c72885aa53949d7b0bb9d08710fb09ec1cf458553dbbb44cb9cb3aa78b9 not found: ID does not exist" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.290869 4809 scope.go:117] "RemoveContainer" containerID="b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0" Feb 26 15:03:58 crc kubenswrapper[4809]: E0226 15:03:58.291204 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0\": container with ID starting with b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0 not found: ID does not exist" containerID="b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.291242 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0"} err="failed to get container status \"b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0\": rpc error: code = NotFound desc = could not find container \"b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0\": container with ID starting with b65e34f3f37c02ec4e4314d8f12e547f9043fa883ab38fa8edd79e5b5bfa95c0 not found: ID does not exist" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.291266 4809 scope.go:117] "RemoveContainer" containerID="7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829" Feb 26 15:03:58 crc kubenswrapper[4809]: E0226 15:03:58.291528 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829\": container with ID starting with 7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829 not found: ID does not exist" containerID="7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829" Feb 26 15:03:58 crc kubenswrapper[4809]: I0226 15:03:58.291569 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829"} err="failed to get container status \"7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829\": rpc error: code = NotFound desc = could not find container \"7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829\": container with ID starting with 7b6fe7f4e655183c23f8af62008622e0efbaeac459a67df33d4cb2ecca60a829 not found: ID does not exist" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.162938 4809 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29535304-6hslg"] Feb 26 15:04:00 crc kubenswrapper[4809]: E0226 15:04:00.163847 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="registry-server" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.163860 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="registry-server" Feb 26 15:04:00 crc kubenswrapper[4809]: E0226 15:04:00.163883 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="extract-content" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.163890 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="extract-content" Feb 26 15:04:00 crc kubenswrapper[4809]: E0226 15:04:00.163921 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="extract-utilities" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.163927 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="extract-utilities" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.164225 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" containerName="registry-server" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.165105 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.170646 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.170811 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.171069 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.180910 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-6hslg"] Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.273376 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd95f6be-905f-4234-a7a3-e51249d8a393" path="/var/lib/kubelet/pods/fd95f6be-905f-4234-a7a3-e51249d8a393/volumes" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.273545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9629\" (UniqueName: \"kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629\") pod \"auto-csr-approver-29535304-6hslg\" (UID: \"11720e6e-725b-454b-b9b1-3fffeaa744e2\") " pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.377361 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9629\" (UniqueName: \"kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629\") pod \"auto-csr-approver-29535304-6hslg\" (UID: \"11720e6e-725b-454b-b9b1-3fffeaa744e2\") " pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.404187 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9629\" (UniqueName: \"kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629\") pod \"auto-csr-approver-29535304-6hslg\" (UID: \"11720e6e-725b-454b-b9b1-3fffeaa744e2\") " pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:00 crc kubenswrapper[4809]: I0226 15:04:00.523031 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:01 crc kubenswrapper[4809]: I0226 15:04:01.031546 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-6hslg"] Feb 26 15:04:01 crc kubenswrapper[4809]: I0226 15:04:01.218552 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-6hslg" event={"ID":"11720e6e-725b-454b-b9b1-3fffeaa744e2","Type":"ContainerStarted","Data":"25ccbf066f1bdc29f3bf43fe1790615aca1f25fdc00a8832ba8cfec4ab2e3d03"} Feb 26 15:04:03 crc kubenswrapper[4809]: I0226 15:04:03.260220 4809 generic.go:334] "Generic (PLEG): container finished" podID="11720e6e-725b-454b-b9b1-3fffeaa744e2" containerID="3a6396f10d0c28e1a82e24939c4817a80da79ac05df4139019ab34b044fc6aa2" exitCode=0 Feb 26 15:04:03 crc kubenswrapper[4809]: I0226 15:04:03.262969 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-6hslg" event={"ID":"11720e6e-725b-454b-b9b1-3fffeaa744e2","Type":"ContainerDied","Data":"3a6396f10d0c28e1a82e24939c4817a80da79ac05df4139019ab34b044fc6aa2"} Feb 26 15:04:04 crc kubenswrapper[4809]: I0226 15:04:04.678623 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:04 crc kubenswrapper[4809]: I0226 15:04:04.804339 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9629\" (UniqueName: \"kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629\") pod \"11720e6e-725b-454b-b9b1-3fffeaa744e2\" (UID: \"11720e6e-725b-454b-b9b1-3fffeaa744e2\") " Feb 26 15:04:04 crc kubenswrapper[4809]: I0226 15:04:04.815820 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629" (OuterVolumeSpecName: "kube-api-access-w9629") pod "11720e6e-725b-454b-b9b1-3fffeaa744e2" (UID: "11720e6e-725b-454b-b9b1-3fffeaa744e2"). InnerVolumeSpecName "kube-api-access-w9629". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:04:04 crc kubenswrapper[4809]: I0226 15:04:04.907772 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9629\" (UniqueName: \"kubernetes.io/projected/11720e6e-725b-454b-b9b1-3fffeaa744e2-kube-api-access-w9629\") on node \"crc\" DevicePath \"\"" Feb 26 15:04:05 crc kubenswrapper[4809]: I0226 15:04:05.288584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535304-6hslg" event={"ID":"11720e6e-725b-454b-b9b1-3fffeaa744e2","Type":"ContainerDied","Data":"25ccbf066f1bdc29f3bf43fe1790615aca1f25fdc00a8832ba8cfec4ab2e3d03"} Feb 26 15:04:05 crc kubenswrapper[4809]: I0226 15:04:05.288912 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ccbf066f1bdc29f3bf43fe1790615aca1f25fdc00a8832ba8cfec4ab2e3d03" Feb 26 15:04:05 crc kubenswrapper[4809]: I0226 15:04:05.288671 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535304-6hslg" Feb 26 15:04:05 crc kubenswrapper[4809]: I0226 15:04:05.757895 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-2hbp5"] Feb 26 15:04:05 crc kubenswrapper[4809]: I0226 15:04:05.767624 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535298-2hbp5"] Feb 26 15:04:06 crc kubenswrapper[4809]: I0226 15:04:06.274740 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9acc1d4b-e84e-4760-a5c0-ce567be35ec1" path="/var/lib/kubelet/pods/9acc1d4b-e84e-4760-a5c0-ce567be35ec1/volumes" Feb 26 15:04:12 crc kubenswrapper[4809]: I0226 15:04:12.706078 4809 scope.go:117] "RemoveContainer" containerID="81fd6b967327bf7b7e7b33a71177385b6890eeb2c3df7c4c7c58896e738525d9" Feb 26 15:05:16 crc kubenswrapper[4809]: I0226 15:05:16.220612 4809 generic.go:334] "Generic (PLEG): container finished" podID="38a4f820-36f9-46c4-b55e-bee9f76ddc4b" containerID="f44e458840964a3b47a56fc2b5d2657c0e4a65dad532e95e0cb256e82364121e" exitCode=0 Feb 26 15:05:16 crc kubenswrapper[4809]: I0226 15:05:16.220638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" event={"ID":"38a4f820-36f9-46c4-b55e-bee9f76ddc4b","Type":"ContainerDied","Data":"f44e458840964a3b47a56fc2b5d2657c0e4a65dad532e95e0cb256e82364121e"} Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.800618 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.890706 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam\") pod \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.890782 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0\") pod \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.890847 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdxh8\" (UniqueName: \"kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8\") pod \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.890921 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory\") pod \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.890996 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle\") pod \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\" (UID: \"38a4f820-36f9-46c4-b55e-bee9f76ddc4b\") " Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.896112 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "38a4f820-36f9-46c4-b55e-bee9f76ddc4b" (UID: "38a4f820-36f9-46c4-b55e-bee9f76ddc4b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.898114 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8" (OuterVolumeSpecName: "kube-api-access-cdxh8") pod "38a4f820-36f9-46c4-b55e-bee9f76ddc4b" (UID: "38a4f820-36f9-46c4-b55e-bee9f76ddc4b"). InnerVolumeSpecName "kube-api-access-cdxh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.923741 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "38a4f820-36f9-46c4-b55e-bee9f76ddc4b" (UID: "38a4f820-36f9-46c4-b55e-bee9f76ddc4b"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.924938 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38a4f820-36f9-46c4-b55e-bee9f76ddc4b" (UID: "38a4f820-36f9-46c4-b55e-bee9f76ddc4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.926861 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory" (OuterVolumeSpecName: "inventory") pod "38a4f820-36f9-46c4-b55e-bee9f76ddc4b" (UID: "38a4f820-36f9-46c4-b55e-bee9f76ddc4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.993730 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.994030 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.994039 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdxh8\" (UniqueName: \"kubernetes.io/projected/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-kube-api-access-cdxh8\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.994049 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:17 crc kubenswrapper[4809]: I0226 15:05:17.994059 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4f820-36f9-46c4-b55e-bee9f76ddc4b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.245997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" event={"ID":"38a4f820-36f9-46c4-b55e-bee9f76ddc4b","Type":"ContainerDied","Data":"b25e019d89538217cdd5530717a6ef81d9079f2bc3a075b79de48ac4e8b3d5e0"} Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.246077 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25e019d89538217cdd5530717a6ef81d9079f2bc3a075b79de48ac4e8b3d5e0" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.246097 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8vngf" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.365766 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls"] Feb 26 15:05:18 crc kubenswrapper[4809]: E0226 15:05:18.366259 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a4f820-36f9-46c4-b55e-bee9f76ddc4b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.366274 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a4f820-36f9-46c4-b55e-bee9f76ddc4b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 15:05:18 crc kubenswrapper[4809]: E0226 15:05:18.366302 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11720e6e-725b-454b-b9b1-3fffeaa744e2" containerName="oc" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.366309 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="11720e6e-725b-454b-b9b1-3fffeaa744e2" containerName="oc" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.366515 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="11720e6e-725b-454b-b9b1-3fffeaa744e2" containerName="oc" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.366536 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a4f820-36f9-46c4-b55e-bee9f76ddc4b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.367304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.369738 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.370963 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.371863 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.372144 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.372867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.382593 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.382848 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.388077 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls"] Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.405824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 
26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.405884 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.405959 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.405978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406069 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406104 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406126 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6jrn\" (UniqueName: \"kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.406231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508658 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508716 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508746 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508811 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508874 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508904 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6jrn\" (UniqueName: \"kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.508982 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.510804 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.514277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.514555 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.514676 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.514795 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.515377 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.515574 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.519578 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.528444 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.534132 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.542979 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6jrn\" (UniqueName: \"kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x2sls\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") 
" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:18 crc kubenswrapper[4809]: I0226 15:05:18.690575 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:05:19 crc kubenswrapper[4809]: I0226 15:05:19.251096 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls"] Feb 26 15:05:20 crc kubenswrapper[4809]: I0226 15:05:20.301571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" event={"ID":"a34f2251-97ea-4dc9-a640-1b3e489d7957","Type":"ContainerStarted","Data":"c7ca477d21d0968addddf00bcbaa757b8e17c82f97e131abc70a8762fac1b348"} Feb 26 15:05:21 crc kubenswrapper[4809]: I0226 15:05:21.316785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" event={"ID":"a34f2251-97ea-4dc9-a640-1b3e489d7957","Type":"ContainerStarted","Data":"24a031941d40b02e2a27bccb7a5465d3e5a032f8152b6f01e05033965faa2242"} Feb 26 15:05:21 crc kubenswrapper[4809]: I0226 15:05:21.345827 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" podStartSLOduration=2.5636820670000002 podStartE2EDuration="3.345803921s" podCreationTimestamp="2026-02-26 15:05:18 +0000 UTC" firstStartedPulling="2026-02-26 15:05:19.269867626 +0000 UTC m=+3097.743188149" lastFinishedPulling="2026-02-26 15:05:20.05198948 +0000 UTC m=+3098.525310003" observedRunningTime="2026-02-26 15:05:21.341732055 +0000 UTC m=+3099.815052608" watchObservedRunningTime="2026-02-26 15:05:21.345803921 +0000 UTC m=+3099.819124464" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.161768 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535306-cz24d"] Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.166238 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.169454 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.169734 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.175587 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-cz24d"] Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.179481 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.290174 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kzl\" (UniqueName: \"kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl\") pod \"auto-csr-approver-29535306-cz24d\" (UID: \"a6b5574d-b5c9-4919-b0fd-02ff95448986\") " pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.393257 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kzl\" (UniqueName: \"kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl\") pod \"auto-csr-approver-29535306-cz24d\" (UID: \"a6b5574d-b5c9-4919-b0fd-02ff95448986\") " pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.421922 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kzl\" (UniqueName: \"kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl\") pod \"auto-csr-approver-29535306-cz24d\" (UID: \"a6b5574d-b5c9-4919-b0fd-02ff95448986\") " pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:00 crc kubenswrapper[4809]: I0226 15:06:00.498813 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:01 crc kubenswrapper[4809]: I0226 15:06:01.051617 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-cz24d"] Feb 26 15:06:01 crc kubenswrapper[4809]: I0226 15:06:01.846675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-cz24d" event={"ID":"a6b5574d-b5c9-4919-b0fd-02ff95448986","Type":"ContainerStarted","Data":"948ead44255486df8707fc056acc252d21fdf489eecd41246bf1de31591d2b8f"} Feb 26 15:06:02 crc kubenswrapper[4809]: I0226 15:06:02.858145 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-cz24d" event={"ID":"a6b5574d-b5c9-4919-b0fd-02ff95448986","Type":"ContainerStarted","Data":"7c40e92912ec9b5be8b565b56c0b81770166ba150513e41d2b3a7d686fe962c4"} Feb 26 15:06:02 crc kubenswrapper[4809]: I0226 15:06:02.870273 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535306-cz24d" podStartSLOduration=1.505395351 podStartE2EDuration="2.870257635s" podCreationTimestamp="2026-02-26 15:06:00 +0000 UTC" firstStartedPulling="2026-02-26 15:06:01.054434285 +0000 UTC m=+3139.527754808" lastFinishedPulling="2026-02-26 15:06:02.419296569 +0000 UTC m=+3140.892617092" observedRunningTime="2026-02-26 15:06:02.868808884 +0000 UTC m=+3141.342129407" watchObservedRunningTime="2026-02-26 15:06:02.870257635 +0000 UTC m=+3141.343578158" Feb 26 15:06:03 crc kubenswrapper[4809]: I0226 15:06:03.871521 4809 generic.go:334] "Generic (PLEG): container finished" podID="a6b5574d-b5c9-4919-b0fd-02ff95448986" containerID="7c40e92912ec9b5be8b565b56c0b81770166ba150513e41d2b3a7d686fe962c4" exitCode=0 Feb 26 15:06:03 crc kubenswrapper[4809]: I0226 15:06:03.871576 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-cz24d" event={"ID":"a6b5574d-b5c9-4919-b0fd-02ff95448986","Type":"ContainerDied","Data":"7c40e92912ec9b5be8b565b56c0b81770166ba150513e41d2b3a7d686fe962c4"} Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.368218 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.450738 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kzl\" (UniqueName: \"kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl\") pod \"a6b5574d-b5c9-4919-b0fd-02ff95448986\" (UID: \"a6b5574d-b5c9-4919-b0fd-02ff95448986\") " Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.457875 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl" (OuterVolumeSpecName: "kube-api-access-55kzl") pod "a6b5574d-b5c9-4919-b0fd-02ff95448986" (UID: "a6b5574d-b5c9-4919-b0fd-02ff95448986"). InnerVolumeSpecName "kube-api-access-55kzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.553490 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kzl\" (UniqueName: \"kubernetes.io/projected/a6b5574d-b5c9-4919-b0fd-02ff95448986-kube-api-access-55kzl\") on node \"crc\" DevicePath \"\"" Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.898408 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535306-cz24d" event={"ID":"a6b5574d-b5c9-4919-b0fd-02ff95448986","Type":"ContainerDied","Data":"948ead44255486df8707fc056acc252d21fdf489eecd41246bf1de31591d2b8f"} Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.898742 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948ead44255486df8707fc056acc252d21fdf489eecd41246bf1de31591d2b8f" Feb 26 15:06:05 crc kubenswrapper[4809]: I0226 15:06:05.899935 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535306-cz24d" Feb 26 15:06:06 crc kubenswrapper[4809]: I0226 15:06:06.456523 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-w5bzn"] Feb 26 15:06:06 crc kubenswrapper[4809]: I0226 15:06:06.467451 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535300-w5bzn"] Feb 26 15:06:08 crc kubenswrapper[4809]: I0226 15:06:08.274124 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45f9bde-e9cb-46ee-b1fd-6c422bcfef77" path="/var/lib/kubelet/pods/b45f9bde-e9cb-46ee-b1fd-6c422bcfef77/volumes" Feb 26 15:06:11 crc kubenswrapper[4809]: I0226 15:06:11.793917 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:06:11 crc kubenswrapper[4809]: I0226 15:06:11.794256 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:06:12 crc kubenswrapper[4809]: I0226 15:06:12.871790 4809 scope.go:117] "RemoveContainer" containerID="cc26af5d93023db15c2fcd55cf44e03524d58b1bac7f1d32b4d8c95f483392ac" Feb 26 15:06:41 crc kubenswrapper[4809]: I0226 15:06:41.793952 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:06:41 crc kubenswrapper[4809]: I0226 15:06:41.794585 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:07:11 crc kubenswrapper[4809]: I0226 15:07:11.794218 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:07:11 crc kubenswrapper[4809]: I0226 15:07:11.795108 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:07:11 crc kubenswrapper[4809]: I0226 15:07:11.795189 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:07:11 crc kubenswrapper[4809]: I0226 15:07:11.796752 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:07:11 crc kubenswrapper[4809]: I0226 15:07:11.797109 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" gracePeriod=600 Feb 26 15:07:11 crc kubenswrapper[4809]: E0226 15:07:11.941195 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:07:12 crc kubenswrapper[4809]: I0226 15:07:12.808554 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" exitCode=0 Feb 26 15:07:12 crc kubenswrapper[4809]: I0226 15:07:12.808624 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"} Feb 26 15:07:12 crc kubenswrapper[4809]: I0226 15:07:12.808679 4809 scope.go:117] "RemoveContainer" containerID="8f4b46b67247594dd13c0494d9324b4ae8e9176b456f0035e656e56381e31321" Feb 26 15:07:12 crc kubenswrapper[4809]: I0226 15:07:12.809759 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:07:12 crc kubenswrapper[4809]: E0226 15:07:12.810797 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" 
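From here the pod sits in steady-state CrashLoopBackOff: each "Error syncing pod" below is the kubelet refusing to restart the container until the current back-off expires. Only the 5m0s cap is visible in the error text; the commonly documented schedule starts at 10s and doubles per restart, which the following sketch assumes:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off (not shown in the log)
	const maxDelay = 5 * time.Minute // cap taken from "back-off 5m0s" above
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // later restarts all wait the full 5m0s
		}
	}
}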
Feb 26 15:07:25 crc kubenswrapper[4809]: I0226 15:07:25.257215 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:07:25 crc kubenswrapper[4809]: E0226 15:07:25.257967 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:07:36 crc kubenswrapper[4809]: I0226 15:07:36.258055 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:07:36 crc kubenswrapper[4809]: E0226 15:07:36.259198 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:07:51 crc kubenswrapper[4809]: I0226 15:07:51.258109 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:07:51 crc kubenswrapper[4809]: E0226 15:07:51.258991 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:07:58 crc kubenswrapper[4809]: I0226 15:07:58.409336 4809 generic.go:334] "Generic (PLEG): container finished" podID="a34f2251-97ea-4dc9-a640-1b3e489d7957" containerID="24a031941d40b02e2a27bccb7a5465d3e5a032f8152b6f01e05033965faa2242" exitCode=0 Feb 26 15:07:58 crc kubenswrapper[4809]: I0226 15:07:58.409431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" event={"ID":"a34f2251-97ea-4dc9-a640-1b3e489d7957","Type":"ContainerDied","Data":"24a031941d40b02e2a27bccb7a5465d3e5a032f8152b6f01e05033965faa2242"} Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.917527 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959009 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6jrn\" (UniqueName: \"kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959149 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959197 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959324 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959366 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959397 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959436 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959498 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.959545 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1\") pod \"a34f2251-97ea-4dc9-a640-1b3e489d7957\" (UID: \"a34f2251-97ea-4dc9-a640-1b3e489d7957\") " Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.966006 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn" (OuterVolumeSpecName: "kube-api-access-w6jrn") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "kube-api-access-w6jrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:07:59 crc kubenswrapper[4809]: I0226 15:07:59.978051 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.013210 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.018732 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.023493 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.023594 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.032650 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.033678 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.038302 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.043495 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.058824 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory" (OuterVolumeSpecName: "inventory") pod "a34f2251-97ea-4dc9-a640-1b3e489d7957" (UID: "a34f2251-97ea-4dc9-a640-1b3e489d7957"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063133 4809 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063182 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063204 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6jrn\" (UniqueName: \"kubernetes.io/projected/a34f2251-97ea-4dc9-a640-1b3e489d7957-kube-api-access-w6jrn\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063221 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063240 4809 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063257 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063275 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063293 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063310 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063327 4809 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.063344 4809 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34f2251-97ea-4dc9-a640-1b3e489d7957-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.152888 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535308-sq2fg"] Feb 26 15:08:00 crc kubenswrapper[4809]: E0226 15:08:00.153413 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b5574d-b5c9-4919-b0fd-02ff95448986" containerName="oc" Feb 26 15:08:00 
crc kubenswrapper[4809]: I0226 15:08:00.153432 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b5574d-b5c9-4919-b0fd-02ff95448986" containerName="oc" Feb 26 15:08:00 crc kubenswrapper[4809]: E0226 15:08:00.153447 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34f2251-97ea-4dc9-a640-1b3e489d7957" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.153454 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34f2251-97ea-4dc9-a640-1b3e489d7957" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.153658 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34f2251-97ea-4dc9-a640-1b3e489d7957" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.153682 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b5574d-b5c9-4919-b0fd-02ff95448986" containerName="oc" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.155777 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.160079 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.160166 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.160219 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.170650 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-sq2fg"] Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.267339 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc89\" (UniqueName: \"kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89\") pod \"auto-csr-approver-29535308-sq2fg\" (UID: \"c27c6a13-5cb9-43e1-b454-96c7a5290dec\") " pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.374494 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc89\" (UniqueName: \"kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89\") pod \"auto-csr-approver-29535308-sq2fg\" (UID: \"c27c6a13-5cb9-43e1-b454-96c7a5290dec\") " pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.400432 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc89\" (UniqueName: \"kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89\") pod \"auto-csr-approver-29535308-sq2fg\" (UID: \"c27c6a13-5cb9-43e1-b454-96c7a5290dec\") " pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.435356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" event={"ID":"a34f2251-97ea-4dc9-a640-1b3e489d7957","Type":"ContainerDied","Data":"c7ca477d21d0968addddf00bcbaa757b8e17c82f97e131abc70a8762fac1b348"} Feb 26 15:08:00 crc 
kubenswrapper[4809]: I0226 15:08:00.435570 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ca477d21d0968addddf00bcbaa757b8e17c82f97e131abc70a8762fac1b348" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.435580 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x2sls" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.488379 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.542663 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47"] Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.551927 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.554610 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.554743 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.554823 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.554953 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.557315 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.560329 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47"] Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.687058 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.687337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.687367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc 
kubenswrapper[4809]: I0226 15:08:00.687412 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.687740 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkwf\" (UniqueName: \"kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.688087 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.688162 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkwf\" (UniqueName: \"kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790229 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790259 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: 
\"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790355 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790381 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.790452 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.795865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.796118 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.796146 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.796556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.797049 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.798189 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.807565 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkwf\" (UniqueName: \"kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-brb47\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:00 crc kubenswrapper[4809]: I0226 15:08:00.872713 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" Feb 26 15:08:01 crc kubenswrapper[4809]: I0226 15:08:01.039748 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-sq2fg"] Feb 26 15:08:01 crc kubenswrapper[4809]: W0226 15:08:01.049546 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27c6a13_5cb9_43e1_b454_96c7a5290dec.slice/crio-25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd WatchSource:0}: Error finding container 25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd: Status 404 returned error can't find the container with id 25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd Feb 26 15:08:01 crc kubenswrapper[4809]: I0226 15:08:01.052380 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:08:01 crc kubenswrapper[4809]: I0226 15:08:01.448937 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" event={"ID":"c27c6a13-5cb9-43e1-b454-96c7a5290dec","Type":"ContainerStarted","Data":"25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd"} Feb 26 15:08:01 crc kubenswrapper[4809]: W0226 15:08:01.469005 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a1d6e3c_8131_4221_bbfa_b50c54318c94.slice/crio-b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12 WatchSource:0}: Error finding container b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12: Status 404 returned error can't find the container with id b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12 Feb 26 15:08:01 crc kubenswrapper[4809]: I0226 15:08:01.476767 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47"] Feb 26 15:08:02 crc kubenswrapper[4809]: I0226 15:08:02.460848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" 
event={"ID":"c27c6a13-5cb9-43e1-b454-96c7a5290dec","Type":"ContainerStarted","Data":"c85c45c679bbb85dab42d0faf596c7333000ea0d603c3670838390998295dbc2"} Feb 26 15:08:02 crc kubenswrapper[4809]: I0226 15:08:02.464088 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" event={"ID":"0a1d6e3c-8131-4221-bbfa-b50c54318c94","Type":"ContainerStarted","Data":"9643c359c870e39eb85abee4f50c35548b19075895a3c33ca7426e00d8745a97"} Feb 26 15:08:02 crc kubenswrapper[4809]: I0226 15:08:02.464171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" event={"ID":"0a1d6e3c-8131-4221-bbfa-b50c54318c94","Type":"ContainerStarted","Data":"b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12"} Feb 26 15:08:02 crc kubenswrapper[4809]: I0226 15:08:02.486136 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" podStartSLOduration=1.545442148 podStartE2EDuration="2.486114212s" podCreationTimestamp="2026-02-26 15:08:00 +0000 UTC" firstStartedPulling="2026-02-26 15:08:01.052195831 +0000 UTC m=+3259.525516354" lastFinishedPulling="2026-02-26 15:08:01.992867875 +0000 UTC m=+3260.466188418" observedRunningTime="2026-02-26 15:08:02.477414953 +0000 UTC m=+3260.950735506" watchObservedRunningTime="2026-02-26 15:08:02.486114212 +0000 UTC m=+3260.959434735" Feb 26 15:08:02 crc kubenswrapper[4809]: I0226 15:08:02.515746 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" podStartSLOduration=2.096157721 podStartE2EDuration="2.515720819s" podCreationTimestamp="2026-02-26 15:08:00 +0000 UTC" firstStartedPulling="2026-02-26 15:08:01.472699647 +0000 UTC m=+3259.946020170" lastFinishedPulling="2026-02-26 15:08:01.892262745 +0000 UTC m=+3260.365583268" observedRunningTime="2026-02-26 15:08:02.497115667 +0000 UTC m=+3260.970436230" watchObservedRunningTime="2026-02-26 15:08:02.515720819 +0000 UTC m=+3260.989041352" Feb 26 15:08:03 crc kubenswrapper[4809]: I0226 15:08:03.481392 4809 generic.go:334] "Generic (PLEG): container finished" podID="c27c6a13-5cb9-43e1-b454-96c7a5290dec" containerID="c85c45c679bbb85dab42d0faf596c7333000ea0d603c3670838390998295dbc2" exitCode=0 Feb 26 15:08:03 crc kubenswrapper[4809]: I0226 15:08:03.481462 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" event={"ID":"c27c6a13-5cb9-43e1-b454-96c7a5290dec","Type":"ContainerDied","Data":"c85c45c679bbb85dab42d0faf596c7333000ea0d603c3670838390998295dbc2"} Feb 26 15:08:04 crc kubenswrapper[4809]: I0226 15:08:04.888702 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:04 crc kubenswrapper[4809]: I0226 15:08:04.923568 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chc89\" (UniqueName: \"kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89\") pod \"c27c6a13-5cb9-43e1-b454-96c7a5290dec\" (UID: \"c27c6a13-5cb9-43e1-b454-96c7a5290dec\") " Feb 26 15:08:04 crc kubenswrapper[4809]: I0226 15:08:04.945406 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89" (OuterVolumeSpecName: "kube-api-access-chc89") pod "c27c6a13-5cb9-43e1-b454-96c7a5290dec" (UID: "c27c6a13-5cb9-43e1-b454-96c7a5290dec"). InnerVolumeSpecName "kube-api-access-chc89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.027355 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chc89\" (UniqueName: \"kubernetes.io/projected/c27c6a13-5cb9-43e1-b454-96c7a5290dec-kube-api-access-chc89\") on node \"crc\" DevicePath \"\"" Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.257349 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:08:05 crc kubenswrapper[4809]: E0226 15:08:05.257760 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.358499 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-cwcjk"] Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.372036 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535302-cwcjk"] Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.517822 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" event={"ID":"c27c6a13-5cb9-43e1-b454-96c7a5290dec","Type":"ContainerDied","Data":"25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd"} Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.517869 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e96eb9ac7fdb9278d7722af0b68af211e3066d3b290c954124362eb26be5bd" Feb 26 15:08:05 crc kubenswrapper[4809]: I0226 15:08:05.517896 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535308-sq2fg" Feb 26 15:08:06 crc kubenswrapper[4809]: I0226 15:08:06.268804 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519243e5-18f0-4642-a293-6d0ec1b7ef3c" path="/var/lib/kubelet/pods/519243e5-18f0-4642-a293-6d0ec1b7ef3c/volumes" Feb 26 15:08:12 crc kubenswrapper[4809]: I0226 15:08:12.989174 4809 scope.go:117] "RemoveContainer" containerID="b40c5272e41cc656217fa75e155b8d1e773894472bdaf2eaec17fa1d2008b75e" Feb 26 15:08:20 crc kubenswrapper[4809]: I0226 15:08:20.257816 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:08:20 crc kubenswrapper[4809]: E0226 15:08:20.259311 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:08:35 crc kubenswrapper[4809]: I0226 15:08:35.257537 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:08:35 crc kubenswrapper[4809]: E0226 15:08:35.259506 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:08:47 crc kubenswrapper[4809]: I0226 15:08:47.257724 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:08:47 crc kubenswrapper[4809]: E0226 15:08:47.258798 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:00 crc kubenswrapper[4809]: I0226 15:09:00.259690 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:09:00 crc kubenswrapper[4809]: E0226 15:09:00.260635 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:15 crc kubenswrapper[4809]: I0226 15:09:15.257657 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:09:15 crc kubenswrapper[4809]: E0226 15:09:15.258836 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:27 crc kubenswrapper[4809]: I0226 15:09:27.258095 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:09:27 crc kubenswrapper[4809]: E0226 15:09:27.259032 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:41 crc kubenswrapper[4809]: I0226 15:09:41.257503 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:09:41 crc kubenswrapper[4809]: E0226 15:09:41.258361 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.871136 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-228vf"] Feb 26 15:09:52 crc kubenswrapper[4809]: E0226 15:09:52.872181 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27c6a13-5cb9-43e1-b454-96c7a5290dec" containerName="oc" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.872197 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27c6a13-5cb9-43e1-b454-96c7a5290dec" containerName="oc" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.872511 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27c6a13-5cb9-43e1-b454-96c7a5290dec" containerName="oc" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.874607 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.929208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.929259 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.929684 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvx5\" (UniqueName: \"kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:52 crc kubenswrapper[4809]: I0226 15:09:52.941487 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-228vf"] Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.031534 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvx5\" (UniqueName: \"kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.031710 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.031739 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.032360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.032440 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.050514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lkvx5\" (UniqueName: \"kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5\") pod \"redhat-operators-228vf\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") " pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.209548 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-228vf" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.256618 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:09:53 crc kubenswrapper[4809]: E0226 15:09:53.256973 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.744959 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-228vf"] Feb 26 15:09:53 crc kubenswrapper[4809]: I0226 15:09:53.787998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerStarted","Data":"e37c5d0230abb70b4b9f245312b7fe5d0d06d5121c145ba2d30f092f532be078"} Feb 26 15:09:54 crc kubenswrapper[4809]: I0226 15:09:54.812679 4809 generic.go:334] "Generic (PLEG): container finished" podID="f08bce22-9747-4132-9407-236aa14e3754" containerID="dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259" exitCode=0 Feb 26 15:09:54 crc kubenswrapper[4809]: I0226 15:09:54.812917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerDied","Data":"dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259"} Feb 26 15:09:55 crc kubenswrapper[4809]: I0226 15:09:55.831331 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerStarted","Data":"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"} Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.505138 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535310-wclfk"] Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.522566 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-wclfk" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.526181 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.526938 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.527498 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.531155 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-wclfk"] Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.666734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwrc\" (UniqueName: \"kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc\") pod \"auto-csr-approver-29535310-wclfk\" (UID: \"6f2ba682-8912-4a3d-8631-6e459d37c59c\") " pod="openshift-infra/auto-csr-approver-29535310-wclfk" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.769600 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwrc\" (UniqueName: \"kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc\") pod \"auto-csr-approver-29535310-wclfk\" (UID: \"6f2ba682-8912-4a3d-8631-6e459d37c59c\") " pod="openshift-infra/auto-csr-approver-29535310-wclfk" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.791048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwrc\" (UniqueName: \"kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc\") pod \"auto-csr-approver-29535310-wclfk\" (UID: \"6f2ba682-8912-4a3d-8631-6e459d37c59c\") " pod="openshift-infra/auto-csr-approver-29535310-wclfk" Feb 26 15:10:00 crc kubenswrapper[4809]: I0226 15:10:00.858054 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-wclfk"
Feb 26 15:10:01 crc kubenswrapper[4809]: I0226 15:10:01.352397 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-wclfk"]
Feb 26 15:10:02 crc kubenswrapper[4809]: I0226 15:10:02.010224 4809 generic.go:334] "Generic (PLEG): container finished" podID="f08bce22-9747-4132-9407-236aa14e3754" containerID="48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4" exitCode=0
Feb 26 15:10:02 crc kubenswrapper[4809]: I0226 15:10:02.010297 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerDied","Data":"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"}
Feb 26 15:10:02 crc kubenswrapper[4809]: I0226 15:10:02.013524 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-wclfk" event={"ID":"6f2ba682-8912-4a3d-8631-6e459d37c59c","Type":"ContainerStarted","Data":"bb6e44fd2b12cbd074b21001905fd99cdfcfac93957a3258f60f5cac4ca8492c"}
Feb 26 15:10:03 crc kubenswrapper[4809]: I0226 15:10:03.033556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerStarted","Data":"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"}
Feb 26 15:10:03 crc kubenswrapper[4809]: I0226 15:10:03.063822 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-228vf" podStartSLOduration=3.443924016 podStartE2EDuration="11.063802336s" podCreationTimestamp="2026-02-26 15:09:52 +0000 UTC" firstStartedPulling="2026-02-26 15:09:54.816877266 +0000 UTC m=+3373.290197789" lastFinishedPulling="2026-02-26 15:10:02.436755576 +0000 UTC m=+3380.910076109" observedRunningTime="2026-02-26 15:10:03.052566516 +0000 UTC m=+3381.525887039" watchObservedRunningTime="2026-02-26 15:10:03.063802336 +0000 UTC m=+3381.537122859"
Feb 26 15:10:03 crc kubenswrapper[4809]: I0226 15:10:03.408985 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:03 crc kubenswrapper[4809]: I0226 15:10:03.409724 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:04 crc kubenswrapper[4809]: I0226 15:10:04.516792 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-228vf" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server" probeResult="failure" output=<
Feb 26 15:10:04 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 15:10:04 crc kubenswrapper[4809]: >
Feb 26 15:10:05 crc kubenswrapper[4809]: I0226 15:10:05.068844 4809 generic.go:334] "Generic (PLEG): container finished" podID="6f2ba682-8912-4a3d-8631-6e459d37c59c" containerID="5b3b089d1c56c07689ef004b1cd1d07cb134051956debbce9732b8d553c78eb8" exitCode=0
Feb 26 15:10:05 crc kubenswrapper[4809]: I0226 15:10:05.068945 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-wclfk" event={"ID":"6f2ba682-8912-4a3d-8631-6e459d37c59c","Type":"ContainerDied","Data":"5b3b089d1c56c07689ef004b1cd1d07cb134051956debbce9732b8d553c78eb8"}
Feb 26 15:10:06 crc kubenswrapper[4809]: I0226 15:10:06.257005 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:10:06 crc kubenswrapper[4809]: E0226 15:10:06.257447 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:10:06 crc kubenswrapper[4809]: I0226 15:10:06.555539 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-wclfk"
Feb 26 15:10:06 crc kubenswrapper[4809]: I0226 15:10:06.704570 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mwrc\" (UniqueName: \"kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc\") pod \"6f2ba682-8912-4a3d-8631-6e459d37c59c\" (UID: \"6f2ba682-8912-4a3d-8631-6e459d37c59c\") "
Feb 26 15:10:06 crc kubenswrapper[4809]: I0226 15:10:06.711238 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc" (OuterVolumeSpecName: "kube-api-access-5mwrc") pod "6f2ba682-8912-4a3d-8631-6e459d37c59c" (UID: "6f2ba682-8912-4a3d-8631-6e459d37c59c"). InnerVolumeSpecName "kube-api-access-5mwrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:10:06 crc kubenswrapper[4809]: I0226 15:10:06.808715 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mwrc\" (UniqueName: \"kubernetes.io/projected/6f2ba682-8912-4a3d-8631-6e459d37c59c-kube-api-access-5mwrc\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:07 crc kubenswrapper[4809]: I0226 15:10:07.094831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535310-wclfk" event={"ID":"6f2ba682-8912-4a3d-8631-6e459d37c59c","Type":"ContainerDied","Data":"bb6e44fd2b12cbd074b21001905fd99cdfcfac93957a3258f60f5cac4ca8492c"}
Feb 26 15:10:07 crc kubenswrapper[4809]: I0226 15:10:07.095071 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6e44fd2b12cbd074b21001905fd99cdfcfac93957a3258f60f5cac4ca8492c"
Feb 26 15:10:07 crc kubenswrapper[4809]: I0226 15:10:07.094910 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535310-wclfk"
Feb 26 15:10:07 crc kubenswrapper[4809]: I0226 15:10:07.642704 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-6hslg"]
Feb 26 15:10:07 crc kubenswrapper[4809]: I0226 15:10:07.656449 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535304-6hslg"]
Feb 26 15:10:08 crc kubenswrapper[4809]: I0226 15:10:08.269918 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11720e6e-725b-454b-b9b1-3fffeaa744e2" path="/var/lib/kubelet/pods/11720e6e-725b-454b-b9b1-3fffeaa744e2/volumes"
Feb 26 15:10:13 crc kubenswrapper[4809]: I0226 15:10:13.116674 4809 scope.go:117] "RemoveContainer" containerID="3a6396f10d0c28e1a82e24939c4817a80da79ac05df4139019ab34b044fc6aa2"
Feb 26 15:10:14 crc kubenswrapper[4809]: I0226 15:10:14.257703 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-228vf" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server" probeResult="failure" output=<
Feb 26 15:10:14 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 15:10:14 crc kubenswrapper[4809]: >
Feb 26 15:10:20 crc kubenswrapper[4809]: I0226 15:10:20.257638 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:10:20 crc kubenswrapper[4809]: E0226 15:10:20.258960 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:10:24 crc kubenswrapper[4809]: I0226 15:10:24.279772 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-228vf" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server" probeResult="failure" output=<
Feb 26 15:10:24 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Feb 26 15:10:24 crc kubenswrapper[4809]: >
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.431558 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:26 crc kubenswrapper[4809]: E0226 15:10:26.432634 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2ba682-8912-4a3d-8631-6e459d37c59c" containerName="oc"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.432652 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2ba682-8912-4a3d-8631-6e459d37c59c" containerName="oc"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.433005 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f2ba682-8912-4a3d-8631-6e459d37c59c" containerName="oc"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.435421 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.448084 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.514049 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28nd\" (UniqueName: \"kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.514150 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.514320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.616851 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.617330 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.617401 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28nd\" (UniqueName: \"kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.617696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.617753 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.637240 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28nd\" (UniqueName: \"kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd\") pod \"redhat-marketplace-vfgth\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") " pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:26 crc kubenswrapper[4809]: I0226 15:10:26.772338 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:27 crc kubenswrapper[4809]: I0226 15:10:27.299176 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:27 crc kubenswrapper[4809]: I0226 15:10:27.343948 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerStarted","Data":"bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3"}
Feb 26 15:10:28 crc kubenswrapper[4809]: I0226 15:10:28.355912 4809 generic.go:334] "Generic (PLEG): container finished" podID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerID="bca28dbab76f3cf4e5457f9688631fc258de5aa2c06672eabe1aa07916528ece" exitCode=0
Feb 26 15:10:28 crc kubenswrapper[4809]: I0226 15:10:28.355970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerDied","Data":"bca28dbab76f3cf4e5457f9688631fc258de5aa2c06672eabe1aa07916528ece"}
Feb 26 15:10:29 crc kubenswrapper[4809]: I0226 15:10:29.375076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerStarted","Data":"22a2ce411adfcfb5ee148335aa0bf32a38a0e6fd8684f03e82561805c6d30c3d"}
Feb 26 15:10:30 crc kubenswrapper[4809]: I0226 15:10:30.404178 4809 generic.go:334] "Generic (PLEG): container finished" podID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerID="22a2ce411adfcfb5ee148335aa0bf32a38a0e6fd8684f03e82561805c6d30c3d" exitCode=0
Feb 26 15:10:30 crc kubenswrapper[4809]: I0226 15:10:30.404280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerDied","Data":"22a2ce411adfcfb5ee148335aa0bf32a38a0e6fd8684f03e82561805c6d30c3d"}
Feb 26 15:10:31 crc kubenswrapper[4809]: I0226 15:10:31.465682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerStarted","Data":"e64d61d9c98919a57b900cfcc8c945ebcb1b4158c17b779a282c16218a796354"}
Feb 26 15:10:31 crc kubenswrapper[4809]: I0226 15:10:31.505538 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vfgth" podStartSLOduration=2.986437647 podStartE2EDuration="5.505518166s" podCreationTimestamp="2026-02-26 15:10:26 +0000 UTC" firstStartedPulling="2026-02-26 15:10:28.359673446 +0000 UTC m=+3406.832993969" lastFinishedPulling="2026-02-26 15:10:30.878753925 +0000 UTC m=+3409.352074488" observedRunningTime="2026-02-26 15:10:31.494551363 +0000 UTC m=+3409.967871906" watchObservedRunningTime="2026-02-26 15:10:31.505518166 +0000 UTC m=+3409.978838699"
Feb 26 15:10:33 crc kubenswrapper[4809]: I0226 15:10:33.257481 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:10:33 crc kubenswrapper[4809]: E0226 15:10:33.258318 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:10:33 crc kubenswrapper[4809]: I0226 15:10:33.282675 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:33 crc kubenswrapper[4809]: I0226 15:10:33.353922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:33 crc kubenswrapper[4809]: I0226 15:10:33.802412 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-228vf"]
Feb 26 15:10:34 crc kubenswrapper[4809]: I0226 15:10:34.529439 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-228vf" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server" containerID="cri-o://cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef" gracePeriod=2
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.349112 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.465571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities\") pod \"f08bce22-9747-4132-9407-236aa14e3754\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") "
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.465793 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content\") pod \"f08bce22-9747-4132-9407-236aa14e3754\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") "
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.466044 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvx5\" (UniqueName: \"kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5\") pod \"f08bce22-9747-4132-9407-236aa14e3754\" (UID: \"f08bce22-9747-4132-9407-236aa14e3754\") "
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.466815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities" (OuterVolumeSpecName: "utilities") pod "f08bce22-9747-4132-9407-236aa14e3754" (UID: "f08bce22-9747-4132-9407-236aa14e3754"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.472159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5" (OuterVolumeSpecName: "kube-api-access-lkvx5") pod "f08bce22-9747-4132-9407-236aa14e3754" (UID: "f08bce22-9747-4132-9407-236aa14e3754"). InnerVolumeSpecName "kube-api-access-lkvx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.541508 4809 generic.go:334] "Generic (PLEG): container finished" podID="f08bce22-9747-4132-9407-236aa14e3754" containerID="cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef" exitCode=0
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.541551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerDied","Data":"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"}
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.541577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-228vf" event={"ID":"f08bce22-9747-4132-9407-236aa14e3754","Type":"ContainerDied","Data":"e37c5d0230abb70b4b9f245312b7fe5d0d06d5121c145ba2d30f092f532be078"}
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.541593 4809 scope.go:117] "RemoveContainer" containerID="cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.541726 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-228vf"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.568730 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvx5\" (UniqueName: \"kubernetes.io/projected/f08bce22-9747-4132-9407-236aa14e3754-kube-api-access-lkvx5\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.568769 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.572271 4809 scope.go:117] "RemoveContainer" containerID="48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.595691 4809 scope.go:117] "RemoveContainer" containerID="dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.610130 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f08bce22-9747-4132-9407-236aa14e3754" (UID: "f08bce22-9747-4132-9407-236aa14e3754"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.669303 4809 scope.go:117] "RemoveContainer" containerID="cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"
Feb 26 15:10:35 crc kubenswrapper[4809]: E0226 15:10:35.669775 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef\": container with ID starting with cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef not found: ID does not exist" containerID="cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.669814 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef"} err="failed to get container status \"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef\": rpc error: code = NotFound desc = could not find container \"cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef\": container with ID starting with cf80ec0df2da45b87a37d4b431219cded96ae9691b6ab2e329e2b534606ab0ef not found: ID does not exist"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.669841 4809 scope.go:117] "RemoveContainer" containerID="48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"
Feb 26 15:10:35 crc kubenswrapper[4809]: E0226 15:10:35.670428 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4\": container with ID starting with 48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4 not found: ID does not exist" containerID="48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.670487 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4"} err="failed to get container status \"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4\": rpc error: code = NotFound desc = could not find container \"48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4\": container with ID starting with 48b5d925cfa91d75efbe46101ab8f6fcedb06fcd9670feeb9dff3b4eb63906c4 not found: ID does not exist"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.670523 4809 scope.go:117] "RemoveContainer" containerID="dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259"
Feb 26 15:10:35 crc kubenswrapper[4809]: E0226 15:10:35.670895 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259\": container with ID starting with dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259 not found: ID does not exist" containerID="dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.670934 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259"} err="failed to get container status \"dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259\": rpc error: code = NotFound desc = could not find container \"dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259\": container with ID starting with dbbf0be9f35b399858eefb2dd8bd75ab87e3b10afbe0d296ee4f1d56644d1259 not found: ID does not exist"
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.671031 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08bce22-9747-4132-9407-236aa14e3754-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.882946 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-228vf"]
Feb 26 15:10:35 crc kubenswrapper[4809]: I0226 15:10:35.897138 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-228vf"]
Feb 26 15:10:36 crc kubenswrapper[4809]: I0226 15:10:36.270575 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08bce22-9747-4132-9407-236aa14e3754" path="/var/lib/kubelet/pods/f08bce22-9747-4132-9407-236aa14e3754/volumes"
Feb 26 15:10:36 crc kubenswrapper[4809]: I0226 15:10:36.772790 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:36 crc kubenswrapper[4809]: I0226 15:10:36.773148 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:36 crc kubenswrapper[4809]: I0226 15:10:36.843299 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:37 crc kubenswrapper[4809]: I0226 15:10:37.619497 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:40 crc kubenswrapper[4809]: I0226 15:10:40.437883 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:40 crc kubenswrapper[4809]: I0226 15:10:40.601147 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vfgth" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="registry-server" containerID="cri-o://e64d61d9c98919a57b900cfcc8c945ebcb1b4158c17b779a282c16218a796354" gracePeriod=2
Feb 26 15:10:41 crc kubenswrapper[4809]: I0226 15:10:41.623103 4809 generic.go:334] "Generic (PLEG): container finished" podID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerID="e64d61d9c98919a57b900cfcc8c945ebcb1b4158c17b779a282c16218a796354" exitCode=0
Feb 26 15:10:41 crc kubenswrapper[4809]: I0226 15:10:41.623605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerDied","Data":"e64d61d9c98919a57b900cfcc8c945ebcb1b4158c17b779a282c16218a796354"}
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.067569 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.142314 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities\") pod \"723237dc-a43a-4e87-bd44-8f952127b6ce\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") "
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.142387 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w28nd\" (UniqueName: \"kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd\") pod \"723237dc-a43a-4e87-bd44-8f952127b6ce\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") "
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.142446 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content\") pod \"723237dc-a43a-4e87-bd44-8f952127b6ce\" (UID: \"723237dc-a43a-4e87-bd44-8f952127b6ce\") "
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.143179 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities" (OuterVolumeSpecName: "utilities") pod "723237dc-a43a-4e87-bd44-8f952127b6ce" (UID: "723237dc-a43a-4e87-bd44-8f952127b6ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.152773 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd" (OuterVolumeSpecName: "kube-api-access-w28nd") pod "723237dc-a43a-4e87-bd44-8f952127b6ce" (UID: "723237dc-a43a-4e87-bd44-8f952127b6ce"). InnerVolumeSpecName "kube-api-access-w28nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.175607 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "723237dc-a43a-4e87-bd44-8f952127b6ce" (UID: "723237dc-a43a-4e87-bd44-8f952127b6ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.246211 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.246244 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w28nd\" (UniqueName: \"kubernetes.io/projected/723237dc-a43a-4e87-bd44-8f952127b6ce-kube-api-access-w28nd\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.246255 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/723237dc-a43a-4e87-bd44-8f952127b6ce-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.636399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfgth" event={"ID":"723237dc-a43a-4e87-bd44-8f952127b6ce","Type":"ContainerDied","Data":"bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3"}
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.636919 4809 scope.go:117] "RemoveContainer" containerID="e64d61d9c98919a57b900cfcc8c945ebcb1b4158c17b779a282c16218a796354"
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.636475 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfgth"
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.666298 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.673953 4809 scope.go:117] "RemoveContainer" containerID="22a2ce411adfcfb5ee148335aa0bf32a38a0e6fd8684f03e82561805c6d30c3d"
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.689440 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfgth"]
Feb 26 15:10:42 crc kubenswrapper[4809]: I0226 15:10:42.750590 4809 scope.go:117] "RemoveContainer" containerID="bca28dbab76f3cf4e5457f9688631fc258de5aa2c06672eabe1aa07916528ece"
Feb 26 15:10:44 crc kubenswrapper[4809]: I0226 15:10:44.271373 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" path="/var/lib/kubelet/pods/723237dc-a43a-4e87-bd44-8f952127b6ce/volumes"
Feb 26 15:10:45 crc kubenswrapper[4809]: E0226 15:10:45.907971 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:10:47 crc kubenswrapper[4809]: E0226 15:10:47.500222 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:10:47 crc kubenswrapper[4809]: I0226 15:10:47.705114 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a1d6e3c-8131-4221-bbfa-b50c54318c94" containerID="9643c359c870e39eb85abee4f50c35548b19075895a3c33ca7426e00d8745a97" exitCode=0
Feb 26 15:10:47 crc kubenswrapper[4809]: I0226 15:10:47.705183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" event={"ID":"0a1d6e3c-8131-4221-bbfa-b50c54318c94","Type":"ContainerDied","Data":"9643c359c870e39eb85abee4f50c35548b19075895a3c33ca7426e00d8745a97"}
Feb 26 15:10:48 crc kubenswrapper[4809]: E0226 15:10:48.104777 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 15:10:48 crc kubenswrapper[4809]: E0226 15:10:48.106499 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 15:10:48 crc kubenswrapper[4809]: I0226 15:10:48.259703 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:10:48 crc kubenswrapper[4809]: E0226 15:10:48.260170 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.384526 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.445980 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446062 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446090 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446116 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446162 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkwf\" (UniqueName: \"kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446257 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.446330 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1\") pod \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\" (UID: \"0a1d6e3c-8131-4221-bbfa-b50c54318c94\") "
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.452992 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf" (OuterVolumeSpecName: "kube-api-access-4wkwf") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "kube-api-access-4wkwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.465182 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.485872 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.486800 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory" (OuterVolumeSpecName: "inventory") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.493357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.511191 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.515316 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0a1d6e3c-8131-4221-bbfa-b50c54318c94" (UID: "0a1d6e3c-8131-4221-bbfa-b50c54318c94"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596121 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596158 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596168 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596178 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596192 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkwf\" (UniqueName: \"kubernetes.io/projected/0a1d6e3c-8131-4221-bbfa-b50c54318c94-kube-api-access-4wkwf\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596202 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.596212 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0a1d6e3c-8131-4221-bbfa-b50c54318c94-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.728703 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47" event={"ID":"0a1d6e3c-8131-4221-bbfa-b50c54318c94","Type":"ContainerDied","Data":"b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12"}
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.728742 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e986e336a8853d9ce807913a0899d4d37cd8a484bb1f54cc0493e8ffdbcf12"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.728806 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-brb47"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.901863 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"]
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902675 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a1d6e3c-8131-4221-bbfa-b50c54318c94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902693 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a1d6e3c-8131-4221-bbfa-b50c54318c94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902735 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="extract-utilities"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902743 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="extract-utilities"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902754 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="extract-content"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902760 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="extract-content"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902775 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="extract-content"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902783 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="extract-content"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902792 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902799 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902828 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="extract-utilities"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902834 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="extract-utilities"
Feb 26 15:10:49 crc kubenswrapper[4809]: E0226 15:10:49.902849 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.902855 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.903081 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="723237dc-a43a-4e87-bd44-8f952127b6ce" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.903096 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a1d6e3c-8131-4221-bbfa-b50c54318c94" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.903116 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08bce22-9747-4132-9407-236aa14e3754" containerName="registry-server"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.904004 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.911479 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.911566 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.911630 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.911710 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.912711 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 26 15:10:49 crc kubenswrapper[4809]: I0226 15:10:49.915411 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"]
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.006573 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.006646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.006821 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.006946 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.007144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.007396 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt5g\" (UniqueName: \"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.007828 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.110642 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwt5g\" (UniqueName: \"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.110757 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.110828 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.110888 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.110951 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.111008 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.111117 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.116335 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.116350 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.116660 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.116755 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.117478 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.118534 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.127839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwt5g\" (UniqueName: \"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.226952 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"
Feb 26 15:10:50 crc kubenswrapper[4809]: I0226 15:10:50.836158 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf"]
Feb 26 15:10:51 crc kubenswrapper[4809]: I0226 15:10:51.747748 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" event={"ID":"29b6dce3-2861-435e-982c-63bdc94b4dca","Type":"ContainerStarted","Data":"29f5bd573a02023240bd211a6af65648094aa201376e7a3f5efc3e6d3fbda17c"}
Feb 26 15:10:52 crc kubenswrapper[4809]: I0226 15:10:52.774173 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" event={"ID":"29b6dce3-2861-435e-982c-63bdc94b4dca","Type":"ContainerStarted","Data":"a66577e8a1a9f037dd645d2d428b6226cb2f64ce13280673264777983e70d38f"}
Feb 26 15:10:52 crc kubenswrapper[4809]: I0226 15:10:52.820791 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" podStartSLOduration=3.237672478 podStartE2EDuration="3.820763116s" podCreationTimestamp="2026-02-26 15:10:49 +0000 UTC" firstStartedPulling="2026-02-26 15:10:50.821788959 +0000 UTC m=+3429.295109492" lastFinishedPulling="2026-02-26 15:10:51.404879587 +0000 UTC m=+3429.878200130" observedRunningTime="2026-02-26 15:10:52.8047735 +0000 UTC m=+3431.278094063" watchObservedRunningTime="2026-02-26 15:10:52.820763116 +0000 UTC m=+3431.294083659"
Feb 26 15:10:56 crc kubenswrapper[4809]: E0226 15:10:56.247928 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:02 crc kubenswrapper[4809]: I0226 15:11:02.282804 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:11:02 crc kubenswrapper[4809]: E0226 15:11:02.283919 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:11:02 crc kubenswrapper[4809]: E0226 15:11:02.652184 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:06 crc kubenswrapper[4809]: E0226 15:11:06.319370 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:15 crc kubenswrapper[4809]: I0226 15:11:15.258439 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:11:15 crc kubenswrapper[4809]: E0226 15:11:15.259529 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:11:16 crc kubenswrapper[4809]: E0226 15:11:16.641066 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:17 crc kubenswrapper[4809]: E0226 15:11:17.504270 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:26 crc kubenswrapper[4809]: I0226 15:11:26.257734 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:11:26 crc kubenswrapper[4809]: E0226 15:11:26.258696 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:11:26 crc kubenswrapper[4809]: E0226 15:11:26.957564 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:32 crc kubenswrapper[4809]: E0226 15:11:32.788665 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:37 crc kubenswrapper[4809]: E0226 15:11:37.003345 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723237dc_a43a_4e87_bd44_8f952127b6ce.slice/crio-bb5766f9eeaf198f40a2db8b63537a359aa1cda52631327d98ae1f4b597d0fc3\": RecentStats: unable to find data in memory cache]"
Feb 26 15:11:41 crc kubenswrapper[4809]: I0226 15:11:41.257336 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:11:41 crc kubenswrapper[4809]: E0226 15:11:41.258261 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30"
Feb 26 15:11:55 crc kubenswrapper[4809]: I0226 15:11:55.257236 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c"
Feb 26 15:11:55 crc kubenswrapper[4809]: E0226 15:11:55.258252 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.142700 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535312-j4qwp"] Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.144846 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.146702 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.147359 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.147653 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.160076 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-j4qwp"] Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.380357 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjx8\" (UniqueName: \"kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8\") pod \"auto-csr-approver-29535312-j4qwp\" (UID: \"61967b6a-d1f9-4781-b179-c509f0049e9b\") " pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.482742 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjx8\" (UniqueName: \"kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8\") pod \"auto-csr-approver-29535312-j4qwp\" (UID: \"61967b6a-d1f9-4781-b179-c509f0049e9b\") " pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.505423 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjx8\" (UniqueName: \"kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8\") pod \"auto-csr-approver-29535312-j4qwp\" (UID: \"61967b6a-d1f9-4781-b179-c509f0049e9b\") " pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:00 crc kubenswrapper[4809]: I0226 15:12:00.765753 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:01 crc kubenswrapper[4809]: I0226 15:12:01.261761 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-j4qwp"] Feb 26 15:12:01 crc kubenswrapper[4809]: I0226 15:12:01.945468 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" event={"ID":"61967b6a-d1f9-4781-b179-c509f0049e9b","Type":"ContainerStarted","Data":"92afd6b49a1d2674a9e281d2c250b12d986370c8e76d0e1de9bd3d76e25e3391"} Feb 26 15:12:07 crc kubenswrapper[4809]: I0226 15:12:07.406357 4809 generic.go:334] "Generic (PLEG): container finished" podID="61967b6a-d1f9-4781-b179-c509f0049e9b" containerID="fb4d837be23525d7d92e25da41158e0b09cb8171a1ad8a28b1963ab250188352" exitCode=0 Feb 26 15:12:07 crc kubenswrapper[4809]: I0226 15:12:07.407688 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" event={"ID":"61967b6a-d1f9-4781-b179-c509f0049e9b","Type":"ContainerDied","Data":"fb4d837be23525d7d92e25da41158e0b09cb8171a1ad8a28b1963ab250188352"} Feb 26 15:12:08 crc kubenswrapper[4809]: I0226 15:12:08.257735 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:12:08 crc kubenswrapper[4809]: E0226 15:12:08.258663 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:12:08 crc kubenswrapper[4809]: I0226 15:12:08.813650 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:08 crc kubenswrapper[4809]: I0226 15:12:08.993132 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjx8\" (UniqueName: \"kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8\") pod \"61967b6a-d1f9-4781-b179-c509f0049e9b\" (UID: \"61967b6a-d1f9-4781-b179-c509f0049e9b\") " Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.000397 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8" (OuterVolumeSpecName: "kube-api-access-bgjx8") pod "61967b6a-d1f9-4781-b179-c509f0049e9b" (UID: "61967b6a-d1f9-4781-b179-c509f0049e9b"). InnerVolumeSpecName "kube-api-access-bgjx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.096443 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjx8\" (UniqueName: \"kubernetes.io/projected/61967b6a-d1f9-4781-b179-c509f0049e9b-kube-api-access-bgjx8\") on node \"crc\" DevicePath \"\"" Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.430714 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" event={"ID":"61967b6a-d1f9-4781-b179-c509f0049e9b","Type":"ContainerDied","Data":"92afd6b49a1d2674a9e281d2c250b12d986370c8e76d0e1de9bd3d76e25e3391"} Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.430773 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92afd6b49a1d2674a9e281d2c250b12d986370c8e76d0e1de9bd3d76e25e3391" Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.430852 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535312-j4qwp" Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.923547 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-cz24d"] Feb 26 15:12:09 crc kubenswrapper[4809]: I0226 15:12:09.932733 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535306-cz24d"] Feb 26 15:12:10 crc kubenswrapper[4809]: I0226 15:12:10.268627 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b5574d-b5c9-4919-b0fd-02ff95448986" path="/var/lib/kubelet/pods/a6b5574d-b5c9-4919-b0fd-02ff95448986/volumes" Feb 26 15:12:13 crc kubenswrapper[4809]: I0226 15:12:13.274410 4809 scope.go:117] "RemoveContainer" containerID="7c40e92912ec9b5be8b565b56c0b81770166ba150513e41d2b3a7d686fe962c4" Feb 26 15:12:22 crc kubenswrapper[4809]: I0226 15:12:22.275189 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:12:23 crc kubenswrapper[4809]: I0226 15:12:23.605330 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1"} Feb 26 15:12:30 crc kubenswrapper[4809]: I0226 15:12:30.972911 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:30 crc kubenswrapper[4809]: E0226 15:12:30.974354 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61967b6a-d1f9-4781-b179-c509f0049e9b" containerName="oc" Feb 26 15:12:30 crc kubenswrapper[4809]: I0226 15:12:30.974373 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="61967b6a-d1f9-4781-b179-c509f0049e9b" containerName="oc" Feb 26 15:12:30 crc kubenswrapper[4809]: I0226 15:12:30.974995 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="61967b6a-d1f9-4781-b179-c509f0049e9b" containerName="oc" Feb 26 15:12:30 crc kubenswrapper[4809]: I0226 15:12:30.978869 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.004513 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5w9f\" (UniqueName: \"kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.005304 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.005594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.010264 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.107781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.107890 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.108449 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.108462 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.108604 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5w9f\" (UniqueName: \"kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.133495 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c5w9f\" (UniqueName: \"kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f\") pod \"community-operators-tr7tl\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:31 crc kubenswrapper[4809]: I0226 15:12:31.312478 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:32 crc kubenswrapper[4809]: I0226 15:12:32.011271 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:32 crc kubenswrapper[4809]: I0226 15:12:32.742904 4809 generic.go:334] "Generic (PLEG): container finished" podID="32ef9164-19ff-4a97-99bc-fced798f9497" containerID="d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59" exitCode=0 Feb 26 15:12:32 crc kubenswrapper[4809]: I0226 15:12:32.742958 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerDied","Data":"d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59"} Feb 26 15:12:32 crc kubenswrapper[4809]: I0226 15:12:32.743250 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerStarted","Data":"e188e04b6c834f6041965f2ec25e89d22d178fdf1d00709af9b46d1ce904d77d"} Feb 26 15:12:34 crc kubenswrapper[4809]: I0226 15:12:34.765810 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerStarted","Data":"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63"} Feb 26 15:12:35 crc kubenswrapper[4809]: I0226 15:12:35.781891 4809 generic.go:334] "Generic (PLEG): container finished" podID="32ef9164-19ff-4a97-99bc-fced798f9497" containerID="fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63" exitCode=0 Feb 26 15:12:35 crc kubenswrapper[4809]: I0226 15:12:35.781979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerDied","Data":"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63"} Feb 26 15:12:36 crc kubenswrapper[4809]: I0226 15:12:36.794185 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerStarted","Data":"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321"} Feb 26 15:12:36 crc kubenswrapper[4809]: I0226 15:12:36.817115 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tr7tl" podStartSLOduration=3.354954371 podStartE2EDuration="6.817091956s" podCreationTimestamp="2026-02-26 15:12:30 +0000 UTC" firstStartedPulling="2026-02-26 15:12:32.745861114 +0000 UTC m=+3531.219181647" lastFinishedPulling="2026-02-26 15:12:36.207998689 +0000 UTC m=+3534.681319232" observedRunningTime="2026-02-26 15:12:36.808977335 +0000 UTC m=+3535.282297868" watchObservedRunningTime="2026-02-26 15:12:36.817091956 +0000 UTC m=+3535.290412479" Feb 26 15:12:41 crc kubenswrapper[4809]: I0226 15:12:41.313730 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:41 crc kubenswrapper[4809]: I0226 15:12:41.314417 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:41 crc kubenswrapper[4809]: I0226 15:12:41.375727 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:41 crc kubenswrapper[4809]: I0226 15:12:41.932413 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:41 crc kubenswrapper[4809]: I0226 15:12:41.992830 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:43 crc kubenswrapper[4809]: I0226 15:12:43.944208 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tr7tl" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="registry-server" containerID="cri-o://cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321" gracePeriod=2 Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.556397 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.652552 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5w9f\" (UniqueName: \"kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f\") pod \"32ef9164-19ff-4a97-99bc-fced798f9497\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.652907 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities\") pod \"32ef9164-19ff-4a97-99bc-fced798f9497\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.653421 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content\") pod \"32ef9164-19ff-4a97-99bc-fced798f9497\" (UID: \"32ef9164-19ff-4a97-99bc-fced798f9497\") " Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.653637 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities" (OuterVolumeSpecName: "utilities") pod "32ef9164-19ff-4a97-99bc-fced798f9497" (UID: "32ef9164-19ff-4a97-99bc-fced798f9497"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.654513 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.663424 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f" (OuterVolumeSpecName: "kube-api-access-c5w9f") pod "32ef9164-19ff-4a97-99bc-fced798f9497" (UID: "32ef9164-19ff-4a97-99bc-fced798f9497"). InnerVolumeSpecName "kube-api-access-c5w9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.715398 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ef9164-19ff-4a97-99bc-fced798f9497" (UID: "32ef9164-19ff-4a97-99bc-fced798f9497"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.756669 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5w9f\" (UniqueName: \"kubernetes.io/projected/32ef9164-19ff-4a97-99bc-fced798f9497-kube-api-access-c5w9f\") on node \"crc\" DevicePath \"\"" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.756909 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ef9164-19ff-4a97-99bc-fced798f9497-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.962299 4809 generic.go:334] "Generic (PLEG): container finished" podID="32ef9164-19ff-4a97-99bc-fced798f9497" containerID="cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321" exitCode=0 Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.962357 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerDied","Data":"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321"} Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.962382 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tr7tl" Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.962405 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tr7tl" event={"ID":"32ef9164-19ff-4a97-99bc-fced798f9497","Type":"ContainerDied","Data":"e188e04b6c834f6041965f2ec25e89d22d178fdf1d00709af9b46d1ce904d77d"} Feb 26 15:12:44 crc kubenswrapper[4809]: I0226 15:12:44.962428 4809 scope.go:117] "RemoveContainer" containerID="cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.014741 4809 scope.go:117] "RemoveContainer" containerID="fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.019302 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.029977 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tr7tl"] Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.051559 4809 scope.go:117] "RemoveContainer" containerID="d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.139665 4809 scope.go:117] "RemoveContainer" containerID="cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321" Feb 26 15:12:45 crc kubenswrapper[4809]: E0226 15:12:45.140213 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321\": container with ID starting with 
cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321 not found: ID does not exist" containerID="cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.140319 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321"} err="failed to get container status \"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321\": rpc error: code = NotFound desc = could not find container \"cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321\": container with ID starting with cda8f3c03bf298b75340cab04c51af79dcc51bd2a5a16f5c9cd89ff7dbd51321 not found: ID does not exist" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.140408 4809 scope.go:117] "RemoveContainer" containerID="fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63" Feb 26 15:12:45 crc kubenswrapper[4809]: E0226 15:12:45.140812 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63\": container with ID starting with fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63 not found: ID does not exist" containerID="fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.140846 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63"} err="failed to get container status \"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63\": rpc error: code = NotFound desc = could not find container \"fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63\": container with ID starting with fb81a48d98af9e054123bf40e06d910a439b8adfb56f415ce99c7a47d2d9ce63 not found: ID does not exist" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.140875 4809 scope.go:117] "RemoveContainer" containerID="d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59" Feb 26 15:12:45 crc kubenswrapper[4809]: E0226 15:12:45.141276 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59\": container with ID starting with d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59 not found: ID does not exist" containerID="d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59" Feb 26 15:12:45 crc kubenswrapper[4809]: I0226 15:12:45.141364 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59"} err="failed to get container status \"d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59\": rpc error: code = NotFound desc = could not find container \"d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59\": container with ID starting with d94b9cbdd366aeb2871ef8fd155c879ac49775ad4a6d9a378d39753193de0f59 not found: ID does not exist" Feb 26 15:12:46 crc kubenswrapper[4809]: I0226 15:12:46.268327 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" path="/var/lib/kubelet/pods/32ef9164-19ff-4a97-99bc-fced798f9497/volumes" Feb 26 15:13:19 crc kubenswrapper[4809]: I0226 15:13:19.469811 
4809 generic.go:334] "Generic (PLEG): container finished" podID="29b6dce3-2861-435e-982c-63bdc94b4dca" containerID="a66577e8a1a9f037dd645d2d428b6226cb2f64ce13280673264777983e70d38f" exitCode=0 Feb 26 15:13:19 crc kubenswrapper[4809]: I0226 15:13:19.469923 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" event={"ID":"29b6dce3-2861-435e-982c-63bdc94b4dca","Type":"ContainerDied","Data":"a66577e8a1a9f037dd645d2d428b6226cb2f64ce13280673264777983e70d38f"} Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.039944 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.129419 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.129710 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwt5g\" (UniqueName: \"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.129924 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.130100 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.130489 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.130728 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.131049 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle\") pod \"29b6dce3-2861-435e-982c-63bdc94b4dca\" (UID: \"29b6dce3-2861-435e-982c-63bdc94b4dca\") " Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.136771 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g" (OuterVolumeSpecName: "kube-api-access-zwt5g") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "kube-api-access-zwt5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.139367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.182133 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.184522 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.195396 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory" (OuterVolumeSpecName: "inventory") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.195896 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.203045 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "29b6dce3-2861-435e-982c-63bdc94b4dca" (UID: "29b6dce3-2861-435e-982c-63bdc94b4dca"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.239972 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240096 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240123 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwt5g\" (UniqueName: \"kubernetes.io/projected/29b6dce3-2861-435e-982c-63bdc94b4dca-kube-api-access-zwt5g\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240153 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240173 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240190 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.240207 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b6dce3-2861-435e-982c-63bdc94b4dca-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.498334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" event={"ID":"29b6dce3-2861-435e-982c-63bdc94b4dca","Type":"ContainerDied","Data":"29f5bd573a02023240bd211a6af65648094aa201376e7a3f5efc3e6d3fbda17c"} Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.498813 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f5bd573a02023240bd211a6af65648094aa201376e7a3f5efc3e6d3fbda17c" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.498529 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.643972 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4"] Feb 26 15:13:21 crc kubenswrapper[4809]: E0226 15:13:21.644460 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b6dce3-2861-435e-982c-63bdc94b4dca" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644480 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b6dce3-2861-435e-982c-63bdc94b4dca" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 15:13:21 crc kubenswrapper[4809]: E0226 15:13:21.644497 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="extract-content" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644505 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="extract-content" Feb 26 15:13:21 crc kubenswrapper[4809]: E0226 15:13:21.644515 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="registry-server" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644523 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="registry-server" Feb 26 15:13:21 crc kubenswrapper[4809]: E0226 15:13:21.644559 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="extract-utilities" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644566 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="extract-utilities" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644790 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ef9164-19ff-4a97-99bc-fced798f9497" containerName="registry-server" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.644821 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b6dce3-2861-435e-982c-63bdc94b4dca" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.645797 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.650924 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.651694 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.651829 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.651939 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.660613 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7hcmb" Feb 26 15:13:21 crc kubenswrapper[4809]: I0226 15:13:21.677645 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4"] Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.753043 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.753117 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.753277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtds\" (UniqueName: \"kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.753327 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.753385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 
15:13:21.855624 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbtds\" (UniqueName: \"kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.855691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.855758 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.855892 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.855935 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.861674 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.862817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.863790 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 
15:13:21.865764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.883277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbtds\" (UniqueName: \"kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds\") pod \"logging-edpm-deployment-openstack-edpm-ipam-8w8l4\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:22 crc kubenswrapper[4809]: I0226 15:13:21.975539 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:23 crc kubenswrapper[4809]: I0226 15:13:23.496186 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4"] Feb 26 15:13:23 crc kubenswrapper[4809]: W0226 15:13:23.496239 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod591f9782_8d8e_4f26_9675_a3d7b7b66493.slice/crio-bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26 WatchSource:0}: Error finding container bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26: Status 404 returned error can't find the container with id bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26 Feb 26 15:13:23 crc kubenswrapper[4809]: I0226 15:13:23.500084 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:13:23 crc kubenswrapper[4809]: I0226 15:13:23.536723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" event={"ID":"591f9782-8d8e-4f26-9675-a3d7b7b66493","Type":"ContainerStarted","Data":"bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26"} Feb 26 15:13:25 crc kubenswrapper[4809]: I0226 15:13:25.564712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" event={"ID":"591f9782-8d8e-4f26-9675-a3d7b7b66493","Type":"ContainerStarted","Data":"ff6d75316ec8df1a484947f374596db4a977254981794a60d857b4065ffd96ee"} Feb 26 15:13:25 crc kubenswrapper[4809]: I0226 15:13:25.596476 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" podStartSLOduration=3.8339982089999998 podStartE2EDuration="4.596453688s" podCreationTimestamp="2026-02-26 15:13:21 +0000 UTC" firstStartedPulling="2026-02-26 15:13:23.499821427 +0000 UTC m=+3581.973141950" lastFinishedPulling="2026-02-26 15:13:24.262276906 +0000 UTC m=+3582.735597429" observedRunningTime="2026-02-26 15:13:25.586518124 +0000 UTC m=+3584.059838647" watchObservedRunningTime="2026-02-26 15:13:25.596453688 +0000 UTC m=+3584.069774231" Feb 26 15:13:40 crc kubenswrapper[4809]: I0226 15:13:40.746533 4809 generic.go:334] "Generic (PLEG): container finished" podID="591f9782-8d8e-4f26-9675-a3d7b7b66493" containerID="ff6d75316ec8df1a484947f374596db4a977254981794a60d857b4065ffd96ee" exitCode=0 Feb 26 15:13:40 crc kubenswrapper[4809]: 
I0226 15:13:40.746566 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" event={"ID":"591f9782-8d8e-4f26-9675-a3d7b7b66493","Type":"ContainerDied","Data":"ff6d75316ec8df1a484947f374596db4a977254981794a60d857b4065ffd96ee"} Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.297551 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.431549 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbtds\" (UniqueName: \"kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds\") pod \"591f9782-8d8e-4f26-9675-a3d7b7b66493\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.432124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam\") pod \"591f9782-8d8e-4f26-9675-a3d7b7b66493\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.432158 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0\") pod \"591f9782-8d8e-4f26-9675-a3d7b7b66493\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.432278 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1\") pod \"591f9782-8d8e-4f26-9675-a3d7b7b66493\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.432322 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory\") pod \"591f9782-8d8e-4f26-9675-a3d7b7b66493\" (UID: \"591f9782-8d8e-4f26-9675-a3d7b7b66493\") " Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.445916 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds" (OuterVolumeSpecName: "kube-api-access-fbtds") pod "591f9782-8d8e-4f26-9675-a3d7b7b66493" (UID: "591f9782-8d8e-4f26-9675-a3d7b7b66493"). InnerVolumeSpecName "kube-api-access-fbtds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.462353 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "591f9782-8d8e-4f26-9675-a3d7b7b66493" (UID: "591f9782-8d8e-4f26-9675-a3d7b7b66493"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.463930 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory" (OuterVolumeSpecName: "inventory") pod "591f9782-8d8e-4f26-9675-a3d7b7b66493" (UID: "591f9782-8d8e-4f26-9675-a3d7b7b66493"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.469763 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "591f9782-8d8e-4f26-9675-a3d7b7b66493" (UID: "591f9782-8d8e-4f26-9675-a3d7b7b66493"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.473604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "591f9782-8d8e-4f26-9675-a3d7b7b66493" (UID: "591f9782-8d8e-4f26-9675-a3d7b7b66493"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.536158 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.536209 4809 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.536230 4809 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.536255 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/591f9782-8d8e-4f26-9675-a3d7b7b66493-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.538088 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbtds\" (UniqueName: \"kubernetes.io/projected/591f9782-8d8e-4f26-9675-a3d7b7b66493-kube-api-access-fbtds\") on node \"crc\" DevicePath \"\"" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.782303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" event={"ID":"591f9782-8d8e-4f26-9675-a3d7b7b66493","Type":"ContainerDied","Data":"bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26"} Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.782345 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb48d3804dc3003c8dccfc163adf8ffeb5ac33bf3ac36c69d2fbf4be0613cf26" Feb 26 15:13:42 crc kubenswrapper[4809]: I0226 15:13:42.782434 4809 util.go:48] "No ready sandbox for pod can be found. 
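The UnmountVolume.TearDown records above mirror the MountVolume.SetUp records from 15:13:21, and every one of them carries the pod UID 591f9782-8d8e-4f26-9675-a3d7b7b66493 inside the volume's UniqueName, so one pod's whole volume lifecycle can be pulled out of a dump like this with a plain substring filter. A rough Go sketch, assuming the journal has been saved to a file whose name is passed as the first argument (hypothetical tooling, nothing from the kubelet):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Filter a saved journal dump down to one pod's volume lifecycle.
// Usage (hypothetical file name):
//   go run filter.go kubelet.log 591f9782-8d8e-4f26-9675-a3d7b7b66493
func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()
	uid := os.Args[2]

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, uid) {
			continue
		}
		// Keep only the volume-lifecycle operations seen in this log.
		for _, op := range []string{"MountVolume.SetUp", "UnmountVolume.TearDown", "Volume detached"} {
			if strings.Contains(line, op) {
				fmt.Println(line)
				break
			}
		}
	}
}
```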
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-8w8l4" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.149962 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535314-fts7l"] Feb 26 15:14:00 crc kubenswrapper[4809]: E0226 15:14:00.150942 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591f9782-8d8e-4f26-9675-a3d7b7b66493" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.150955 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="591f9782-8d8e-4f26-9675-a3d7b7b66493" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.151261 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="591f9782-8d8e-4f26-9675-a3d7b7b66493" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.152157 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.155350 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.155771 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.156117 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.160784 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-fts7l"] Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.260679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bd8q\" (UniqueName: \"kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q\") pod \"auto-csr-approver-29535314-fts7l\" (UID: \"c749b7d7-5b88-487f-9673-9e5ccee431fa\") " pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.362790 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bd8q\" (UniqueName: \"kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q\") pod \"auto-csr-approver-29535314-fts7l\" (UID: \"c749b7d7-5b88-487f-9673-9e5ccee431fa\") " pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.400996 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bd8q\" (UniqueName: \"kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q\") pod \"auto-csr-approver-29535314-fts7l\" (UID: \"c749b7d7-5b88-487f-9673-9e5ccee431fa\") " pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:00 crc kubenswrapper[4809]: I0226 15:14:00.477723 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:01 crc kubenswrapper[4809]: I0226 15:14:01.034053 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-fts7l"] Feb 26 15:14:01 crc kubenswrapper[4809]: I0226 15:14:01.046372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-fts7l" event={"ID":"c749b7d7-5b88-487f-9673-9e5ccee431fa","Type":"ContainerStarted","Data":"5947033074ad09ab8cce6e731abc5f770b2fb9914ff068878790c839510071ab"} Feb 26 15:14:03 crc kubenswrapper[4809]: I0226 15:14:03.074927 4809 generic.go:334] "Generic (PLEG): container finished" podID="c749b7d7-5b88-487f-9673-9e5ccee431fa" containerID="8b132cf59776aa053c26c690d3ec9b52dc5ba09a529d1094f596ffe82bc48363" exitCode=0 Feb 26 15:14:03 crc kubenswrapper[4809]: I0226 15:14:03.075062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-fts7l" event={"ID":"c749b7d7-5b88-487f-9673-9e5ccee431fa","Type":"ContainerDied","Data":"8b132cf59776aa053c26c690d3ec9b52dc5ba09a529d1094f596ffe82bc48363"} Feb 26 15:14:04 crc kubenswrapper[4809]: I0226 15:14:04.602398 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:04 crc kubenswrapper[4809]: I0226 15:14:04.695719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bd8q\" (UniqueName: \"kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q\") pod \"c749b7d7-5b88-487f-9673-9e5ccee431fa\" (UID: \"c749b7d7-5b88-487f-9673-9e5ccee431fa\") " Feb 26 15:14:04 crc kubenswrapper[4809]: I0226 15:14:04.707350 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q" (OuterVolumeSpecName: "kube-api-access-9bd8q") pod "c749b7d7-5b88-487f-9673-9e5ccee431fa" (UID: "c749b7d7-5b88-487f-9673-9e5ccee431fa"). InnerVolumeSpecName "kube-api-access-9bd8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:14:04 crc kubenswrapper[4809]: I0226 15:14:04.798850 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bd8q\" (UniqueName: \"kubernetes.io/projected/c749b7d7-5b88-487f-9673-9e5ccee431fa-kube-api-access-9bd8q\") on node \"crc\" DevicePath \"\"" Feb 26 15:14:05 crc kubenswrapper[4809]: I0226 15:14:05.098351 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535314-fts7l" event={"ID":"c749b7d7-5b88-487f-9673-9e5ccee431fa","Type":"ContainerDied","Data":"5947033074ad09ab8cce6e731abc5f770b2fb9914ff068878790c839510071ab"} Feb 26 15:14:05 crc kubenswrapper[4809]: I0226 15:14:05.098411 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5947033074ad09ab8cce6e731abc5f770b2fb9914ff068878790c839510071ab" Feb 26 15:14:05 crc kubenswrapper[4809]: I0226 15:14:05.098387 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535314-fts7l" Feb 26 15:14:05 crc kubenswrapper[4809]: I0226 15:14:05.694256 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-sq2fg"] Feb 26 15:14:05 crc kubenswrapper[4809]: I0226 15:14:05.705993 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535308-sq2fg"] Feb 26 15:14:06 crc kubenswrapper[4809]: I0226 15:14:06.272923 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27c6a13-5cb9-43e1-b454-96c7a5290dec" path="/var/lib/kubelet/pods/c27c6a13-5cb9-43e1-b454-96c7a5290dec/volumes" Feb 26 15:14:13 crc kubenswrapper[4809]: I0226 15:14:13.460345 4809 scope.go:117] "RemoveContainer" containerID="c85c45c679bbb85dab42d0faf596c7333000ea0d603c3670838390998295dbc2" Feb 26 15:14:41 crc kubenswrapper[4809]: I0226 15:14:41.793433 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:14:41 crc kubenswrapper[4809]: I0226 15:14:41.793949 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.585105 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:14:59 crc kubenswrapper[4809]: E0226 15:14:59.586598 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c749b7d7-5b88-487f-9673-9e5ccee431fa" containerName="oc" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.586624 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c749b7d7-5b88-487f-9673-9e5ccee431fa" containerName="oc" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.587108 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c749b7d7-5b88-487f-9673-9e5ccee431fa" containerName="oc" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.600721 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.601113 4809 util.go:30] "No sandbox for pod can be found. 
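A note on the job names appearing here: the numeric suffix the CronJob controller appends is the scheduled slot expressed in minutes since the Unix epoch, which is why auto-csr-approver-29535314 lands at 15:14:00 (29535314 x 60 = 1772118840 = 2026-02-26 15:14:00 UTC), collect-profiles-29535315 at 15:15, and the later runs -29535316 and -29535318 at 15:16 and 15:18. The SyncLoop DELETE of auto-csr-approver-29535308-sq2fg above is history pruning of the finished run from the 15:08 slot. A short Go check of the decoding:

```go
package main

import (
	"fmt"
	"time"
)

// The CronJob controller derives Job names from the scheduled time in
// minutes since the Unix epoch, so the suffix in
// auto-csr-approver-29535314-fts7l decodes back to its schedule slot.
func main() {
	for _, minutes := range []int64{29535308, 29535314, 29535315, 29535316, 29535318} {
		t := time.Unix(minutes*60, 0).UTC()
		fmt.Printf("%d -> %s\n", minutes, t.Format("2006-01-02 15:04:05 MST"))
	}
	// 29535314 -> 2026-02-26 15:14:00 UTC, matching the SyncLoop ADD above.
}
```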
Need to start a new one" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.685170 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchx7\" (UniqueName: \"kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.685231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.685465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.787665 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchx7\" (UniqueName: \"kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.787748 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.787929 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.788494 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.788559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities\") pod \"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.810162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchx7\" (UniqueName: \"kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7\") pod 
\"certified-operators-dbq2z\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:14:59 crc kubenswrapper[4809]: I0226 15:14:59.940273 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.148273 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n"] Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.152141 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.162149 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.162294 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n"] Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.162329 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.199339 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.199394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.199591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc5g\" (UniqueName: \"kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.301314 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc5g\" (UniqueName: \"kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.301507 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: 
I0226 15:15:00.301546 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.303839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.314739 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.324757 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc5g\" (UniqueName: \"kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g\") pod \"collect-profiles-29535315-jfc7n\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.446854 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.491638 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.943140 4809 generic.go:334] "Generic (PLEG): container finished" podID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerID="339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0" exitCode=0 Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.943193 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerDied","Data":"339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0"} Feb 26 15:15:00 crc kubenswrapper[4809]: I0226 15:15:00.943422 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerStarted","Data":"5611c359938c739f0e9bc48299dd61e7e1181f286267b62e2878e5a23d4fd2a9"} Feb 26 15:15:01 crc kubenswrapper[4809]: I0226 15:15:01.010395 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n"] Feb 26 15:15:01 crc kubenswrapper[4809]: I0226 15:15:01.958997 4809 generic.go:334] "Generic (PLEG): container finished" podID="823f865d-0e91-4975-8c47-bc6a61a1a027" containerID="ff397b7f04e4d9d07665e62d56faf289a0939a05e06f3bbe03185757bbfb93c6" exitCode=0 Feb 26 15:15:01 crc kubenswrapper[4809]: I0226 15:15:01.959060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" event={"ID":"823f865d-0e91-4975-8c47-bc6a61a1a027","Type":"ContainerDied","Data":"ff397b7f04e4d9d07665e62d56faf289a0939a05e06f3bbe03185757bbfb93c6"} Feb 26 15:15:01 crc kubenswrapper[4809]: I0226 15:15:01.959655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" event={"ID":"823f865d-0e91-4975-8c47-bc6a61a1a027","Type":"ContainerStarted","Data":"959a95b834b241b374eed087f0eee3af8285b0822d6697353daef0556bd62a31"} Feb 26 15:15:02 crc kubenswrapper[4809]: I0226 15:15:02.981786 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerStarted","Data":"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01"} Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.422516 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.483763 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bc5g\" (UniqueName: \"kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g\") pod \"823f865d-0e91-4975-8c47-bc6a61a1a027\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.483979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume\") pod \"823f865d-0e91-4975-8c47-bc6a61a1a027\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.484155 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume\") pod \"823f865d-0e91-4975-8c47-bc6a61a1a027\" (UID: \"823f865d-0e91-4975-8c47-bc6a61a1a027\") " Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.484527 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume" (OuterVolumeSpecName: "config-volume") pod "823f865d-0e91-4975-8c47-bc6a61a1a027" (UID: "823f865d-0e91-4975-8c47-bc6a61a1a027"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.485498 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/823f865d-0e91-4975-8c47-bc6a61a1a027-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.489856 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "823f865d-0e91-4975-8c47-bc6a61a1a027" (UID: "823f865d-0e91-4975-8c47-bc6a61a1a027"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.504878 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g" (OuterVolumeSpecName: "kube-api-access-7bc5g") pod "823f865d-0e91-4975-8c47-bc6a61a1a027" (UID: "823f865d-0e91-4975-8c47-bc6a61a1a027"). InnerVolumeSpecName "kube-api-access-7bc5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.587461 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/823f865d-0e91-4975-8c47-bc6a61a1a027-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:03 crc kubenswrapper[4809]: I0226 15:15:03.587676 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bc5g\" (UniqueName: \"kubernetes.io/projected/823f865d-0e91-4975-8c47-bc6a61a1a027-kube-api-access-7bc5g\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.001857 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.001898 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n" event={"ID":"823f865d-0e91-4975-8c47-bc6a61a1a027","Type":"ContainerDied","Data":"959a95b834b241b374eed087f0eee3af8285b0822d6697353daef0556bd62a31"} Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.001965 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959a95b834b241b374eed087f0eee3af8285b0822d6697353daef0556bd62a31" Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.014151 4809 generic.go:334] "Generic (PLEG): container finished" podID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerID="f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01" exitCode=0 Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.014458 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerDied","Data":"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01"} Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.513630 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4"] Feb 26 15:15:04 crc kubenswrapper[4809]: I0226 15:15:04.526484 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535270-g7xf4"] Feb 26 15:15:05 crc kubenswrapper[4809]: I0226 15:15:05.035237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerStarted","Data":"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d"} Feb 26 15:15:05 crc kubenswrapper[4809]: I0226 15:15:05.057949 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbq2z" podStartSLOduration=2.556121889 podStartE2EDuration="6.057922763s" podCreationTimestamp="2026-02-26 15:14:59 +0000 UTC" firstStartedPulling="2026-02-26 15:15:00.944873619 +0000 UTC m=+3679.418194142" lastFinishedPulling="2026-02-26 15:15:04.446674493 +0000 UTC m=+3682.919995016" observedRunningTime="2026-02-26 15:15:05.055750861 +0000 UTC m=+3683.529071414" watchObservedRunningTime="2026-02-26 15:15:05.057922763 +0000 UTC m=+3683.531243326" Feb 26 15:15:06 crc kubenswrapper[4809]: I0226 15:15:06.273189 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6de658b-2510-4a3c-a895-39e7b760b5e2" path="/var/lib/kubelet/pods/d6de658b-2510-4a3c-a895-39e7b760b5e2/volumes" Feb 26 15:15:09 crc kubenswrapper[4809]: I0226 15:15:09.941167 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:09 crc kubenswrapper[4809]: I0226 15:15:09.941641 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:10 crc kubenswrapper[4809]: I0226 15:15:10.036100 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:10 crc kubenswrapper[4809]: I0226 15:15:10.163595 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:10 crc kubenswrapper[4809]: I0226 15:15:10.298109 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:15:11 crc kubenswrapper[4809]: I0226 15:15:11.793900 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:15:11 crc kubenswrapper[4809]: I0226 15:15:11.794708 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.111126 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dbq2z" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="registry-server" containerID="cri-o://51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d" gracePeriod=2 Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.619395 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.739654 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchx7\" (UniqueName: \"kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7\") pod \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.739730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content\") pod \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.739890 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities\") pod \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\" (UID: \"cef2909e-feed-48d2-a86f-c2fc57e2e8b0\") " Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.741109 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities" (OuterVolumeSpecName: "utilities") pod "cef2909e-feed-48d2-a86f-c2fc57e2e8b0" (UID: "cef2909e-feed-48d2-a86f-c2fc57e2e8b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.747576 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7" (OuterVolumeSpecName: "kube-api-access-xchx7") pod "cef2909e-feed-48d2-a86f-c2fc57e2e8b0" (UID: "cef2909e-feed-48d2-a86f-c2fc57e2e8b0"). InnerVolumeSpecName "kube-api-access-xchx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.810197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cef2909e-feed-48d2-a86f-c2fc57e2e8b0" (UID: "cef2909e-feed-48d2-a86f-c2fc57e2e8b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.843749 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchx7\" (UniqueName: \"kubernetes.io/projected/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-kube-api-access-xchx7\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.844008 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:12 crc kubenswrapper[4809]: I0226 15:15:12.844152 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef2909e-feed-48d2-a86f-c2fc57e2e8b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.132996 4809 generic.go:334] "Generic (PLEG): container finished" podID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerID="51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d" exitCode=0 Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.133081 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerDied","Data":"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d"} Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.133146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbq2z" event={"ID":"cef2909e-feed-48d2-a86f-c2fc57e2e8b0","Type":"ContainerDied","Data":"5611c359938c739f0e9bc48299dd61e7e1181f286267b62e2878e5a23d4fd2a9"} Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.133179 4809 scope.go:117] "RemoveContainer" containerID="51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.133507 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbq2z" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.189002 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.190550 4809 scope.go:117] "RemoveContainer" containerID="f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.204983 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dbq2z"] Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.224515 4809 scope.go:117] "RemoveContainer" containerID="339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.295832 4809 scope.go:117] "RemoveContainer" containerID="51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d" Feb 26 15:15:13 crc kubenswrapper[4809]: E0226 15:15:13.296986 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d\": container with ID starting with 51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d not found: ID does not exist" containerID="51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.297100 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d"} err="failed to get container status \"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d\": rpc error: code = NotFound desc = could not find container \"51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d\": container with ID starting with 51ad50da309747a65dd0bc3f436307e7df28b3f4b7cebd1a9ac3a89e44825c8d not found: ID does not exist" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.297130 4809 scope.go:117] "RemoveContainer" containerID="f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01" Feb 26 15:15:13 crc kubenswrapper[4809]: E0226 15:15:13.298003 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01\": container with ID starting with f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01 not found: ID does not exist" containerID="f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.298084 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01"} err="failed to get container status \"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01\": rpc error: code = NotFound desc = could not find container \"f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01\": container with ID starting with f9a8853ac4cfc8d40418d6cdaabadec157828891273e9b3934c4c554be7d2d01 not found: ID does not exist" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.298105 4809 scope.go:117] "RemoveContainer" containerID="339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0" Feb 26 15:15:13 crc kubenswrapper[4809]: E0226 15:15:13.303157 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0\": container with ID starting with 339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0 not found: ID does not exist" containerID="339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.303200 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0"} err="failed to get container status \"339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0\": rpc error: code = NotFound desc = could not find container \"339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0\": container with ID starting with 339b672536be3e83a47ab35a6362389bcf33f993479d9aac89edd05e95194ec0 not found: ID does not exist" Feb 26 15:15:13 crc kubenswrapper[4809]: I0226 15:15:13.626792 4809 scope.go:117] "RemoveContainer" containerID="39190b5b0e783a5c00db382463c39f70212215dab307e491ab21e664fcd312f6" Feb 26 15:15:14 crc kubenswrapper[4809]: I0226 15:15:14.271770 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" path="/var/lib/kubelet/pods/cef2909e-feed-48d2-a86f-c2fc57e2e8b0/volumes" Feb 26 15:15:41 crc kubenswrapper[4809]: I0226 15:15:41.794377 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:15:41 crc kubenswrapper[4809]: I0226 15:15:41.795303 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:15:41 crc kubenswrapper[4809]: I0226 15:15:41.795392 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:15:41 crc kubenswrapper[4809]: I0226 15:15:41.797056 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:15:41 crc kubenswrapper[4809]: I0226 15:15:41.797207 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1" gracePeriod=600 Feb 26 15:15:42 crc kubenswrapper[4809]: I0226 15:15:42.555299 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1" exitCode=0 Feb 26 15:15:42 crc kubenswrapper[4809]: I0226 15:15:42.555428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
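The RemoveContainer/ContainerStatus exchange above is the usual idempotent-cleanup pattern: once a container has been deleted, a follow-up status query comes back NotFound, and the kubelet logs the error but carries on, because "already gone" counts as success for cleanup purposes. A sketch of that check, assuming a gRPC-backed CRI client (hypothetical helper, not kubelet code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Cleanup code talking to a gRPC CRI runtime typically treats NotFound as
// "already deleted" rather than as a failure, which is how the kubelet
// shrugs off the ContainerStatus errors above. Hypothetical helper.
func alreadyGone(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, `could not find container "51ad50..."`)
	fmt.Println(alreadyGone(err)) // true: safe to ignore during cleanup
}
```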
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1"} Feb 26 15:15:42 crc kubenswrapper[4809]: I0226 15:15:42.555762 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26"} Feb 26 15:15:42 crc kubenswrapper[4809]: I0226 15:15:42.555787 4809 scope.go:117] "RemoveContainer" containerID="634f882a222e753cd4b1cc4e0add8ff7b28f470523bdba6741af283d3d053f1c" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.175390 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535316-56sk5"] Feb 26 15:16:00 crc kubenswrapper[4809]: E0226 15:16:00.176823 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823f865d-0e91-4975-8c47-bc6a61a1a027" containerName="collect-profiles" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.176845 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="823f865d-0e91-4975-8c47-bc6a61a1a027" containerName="collect-profiles" Feb 26 15:16:00 crc kubenswrapper[4809]: E0226 15:16:00.176881 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="extract-utilities" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.176897 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="extract-utilities" Feb 26 15:16:00 crc kubenswrapper[4809]: E0226 15:16:00.176921 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="extract-content" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.176934 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="extract-content" Feb 26 15:16:00 crc kubenswrapper[4809]: E0226 15:16:00.176966 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="registry-server" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.176978 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="registry-server" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.177410 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef2909e-feed-48d2-a86f-c2fc57e2e8b0" containerName="registry-server" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.177458 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="823f865d-0e91-4975-8c47-bc6a61a1a027" containerName="collect-profiles" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.178789 4809 util.go:30] "No sandbox for pod can be found. 
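The sequence from 15:15:41 above shows the full liveness-probe escalation: the prober's GET to http://127.0.0.1:8798/health is refused, kubelet marks the probe unhealthy, logs that the container "failed liveness probe, will be restarted", kills it with the pod's 600s grace period, and a PLEG ContainerStarted for the replacement (3a06b5...) follows a second later. A minimal sketch of the HTTP check itself, assuming kubelet's usual rule that any 2xx/3xx response is a pass (hypothetical code, not the kubelet prober):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Minimal HTTP liveness check of the kind the prober records above imply:
// GET the health endpoint; a dial error such as "connection refused" (no
// listener on the port) is a probe failure, and enough consecutive
// failures make kubelet kill and restart the container.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```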
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.183091 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.183213 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.183387 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.189539 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535316-56sk5"] Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.369569 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44prq\" (UniqueName: \"kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq\") pod \"auto-csr-approver-29535316-56sk5\" (UID: \"6c8aa4f8-f5d6-4570-a388-281797d0184c\") " pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.475501 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44prq\" (UniqueName: \"kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq\") pod \"auto-csr-approver-29535316-56sk5\" (UID: \"6c8aa4f8-f5d6-4570-a388-281797d0184c\") " pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.501797 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44prq\" (UniqueName: \"kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq\") pod \"auto-csr-approver-29535316-56sk5\" (UID: \"6c8aa4f8-f5d6-4570-a388-281797d0184c\") " pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:00 crc kubenswrapper[4809]: I0226 15:16:00.513842 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:01 crc kubenswrapper[4809]: I0226 15:16:01.074287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535316-56sk5"] Feb 26 15:16:01 crc kubenswrapper[4809]: I0226 15:16:01.812122 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535316-56sk5" event={"ID":"6c8aa4f8-f5d6-4570-a388-281797d0184c","Type":"ContainerStarted","Data":"56b64c487b957de74cbb8b66d9cc6df8dc9500e2181f618199d930b1be364067"} Feb 26 15:16:02 crc kubenswrapper[4809]: I0226 15:16:02.839062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535316-56sk5" event={"ID":"6c8aa4f8-f5d6-4570-a388-281797d0184c","Type":"ContainerStarted","Data":"9c479b0221bfb1aebd8f2b1de626ecac0b4e10a30a0314474618ebcba94df4d7"} Feb 26 15:16:02 crc kubenswrapper[4809]: I0226 15:16:02.860344 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535316-56sk5" podStartSLOduration=1.524490343 podStartE2EDuration="2.860322592s" podCreationTimestamp="2026-02-26 15:16:00 +0000 UTC" firstStartedPulling="2026-02-26 15:16:01.063587678 +0000 UTC m=+3739.536908201" lastFinishedPulling="2026-02-26 15:16:02.399419887 +0000 UTC m=+3740.872740450" observedRunningTime="2026-02-26 15:16:02.853654231 +0000 UTC m=+3741.326974744" watchObservedRunningTime="2026-02-26 15:16:02.860322592 +0000 UTC m=+3741.333643115" Feb 26 15:16:03 crc kubenswrapper[4809]: I0226 15:16:03.850907 4809 generic.go:334] "Generic (PLEG): container finished" podID="6c8aa4f8-f5d6-4570-a388-281797d0184c" containerID="9c479b0221bfb1aebd8f2b1de626ecac0b4e10a30a0314474618ebcba94df4d7" exitCode=0 Feb 26 15:16:03 crc kubenswrapper[4809]: I0226 15:16:03.851029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535316-56sk5" event={"ID":"6c8aa4f8-f5d6-4570-a388-281797d0184c","Type":"ContainerDied","Data":"9c479b0221bfb1aebd8f2b1de626ecac0b4e10a30a0314474618ebcba94df4d7"} Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.274460 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.394332 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-wclfk"] Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.408954 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44prq\" (UniqueName: \"kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq\") pod \"6c8aa4f8-f5d6-4570-a388-281797d0184c\" (UID: \"6c8aa4f8-f5d6-4570-a388-281797d0184c\") " Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.415476 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535310-wclfk"] Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.421818 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq" (OuterVolumeSpecName: "kube-api-access-44prq") pod "6c8aa4f8-f5d6-4570-a388-281797d0184c" (UID: "6c8aa4f8-f5d6-4570-a388-281797d0184c"). InnerVolumeSpecName "kube-api-access-44prq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.512972 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44prq\" (UniqueName: \"kubernetes.io/projected/6c8aa4f8-f5d6-4570-a388-281797d0184c-kube-api-access-44prq\") on node \"crc\" DevicePath \"\"" Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.878377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535316-56sk5" event={"ID":"6c8aa4f8-f5d6-4570-a388-281797d0184c","Type":"ContainerDied","Data":"56b64c487b957de74cbb8b66d9cc6df8dc9500e2181f618199d930b1be364067"} Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.878421 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b64c487b957de74cbb8b66d9cc6df8dc9500e2181f618199d930b1be364067" Feb 26 15:16:05 crc kubenswrapper[4809]: I0226 15:16:05.878437 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535316-56sk5" Feb 26 15:16:06 crc kubenswrapper[4809]: I0226 15:16:06.274794 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2ba682-8912-4a3d-8631-6e459d37c59c" path="/var/lib/kubelet/pods/6f2ba682-8912-4a3d-8631-6e459d37c59c/volumes" Feb 26 15:16:13 crc kubenswrapper[4809]: I0226 15:16:13.751378 4809 scope.go:117] "RemoveContainer" containerID="5b3b089d1c56c07689ef004b1cd1d07cb134051956debbce9732b8d553c78eb8" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.178615 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535318-gdj6g"] Feb 26 15:18:00 crc kubenswrapper[4809]: E0226 15:18:00.181048 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8aa4f8-f5d6-4570-a388-281797d0184c" containerName="oc" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.181094 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8aa4f8-f5d6-4570-a388-281797d0184c" containerName="oc" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.181560 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c8aa4f8-f5d6-4570-a388-281797d0184c" containerName="oc" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.183085 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.187005 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.188265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.188290 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.201944 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535318-gdj6g"] Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.215065 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pqd\" (UniqueName: \"kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd\") pod \"auto-csr-approver-29535318-gdj6g\" (UID: \"4acd1010-3a83-49a0-b4c3-d13792e73fdd\") " pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.318254 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74pqd\" (UniqueName: \"kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd\") pod \"auto-csr-approver-29535318-gdj6g\" (UID: \"4acd1010-3a83-49a0-b4c3-d13792e73fdd\") " pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.357728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pqd\" (UniqueName: \"kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd\") pod \"auto-csr-approver-29535318-gdj6g\" (UID: \"4acd1010-3a83-49a0-b4c3-d13792e73fdd\") " pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:00 crc kubenswrapper[4809]: I0226 15:18:00.513729 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:01 crc kubenswrapper[4809]: I0226 15:18:01.112274 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535318-gdj6g"] Feb 26 15:18:01 crc kubenswrapper[4809]: I0226 15:18:01.503692 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" event={"ID":"4acd1010-3a83-49a0-b4c3-d13792e73fdd","Type":"ContainerStarted","Data":"785acac3c7095fc5a398f26669e0dc9bf3155bc1daa8e5d5b0e7d9eeb4ac10ba"} Feb 26 15:18:03 crc kubenswrapper[4809]: I0226 15:18:03.533169 4809 generic.go:334] "Generic (PLEG): container finished" podID="4acd1010-3a83-49a0-b4c3-d13792e73fdd" containerID="6ac22ed53c37ecd80a88de56bbe5deb31296188adb3c896a5daff2fd6c6fa5b7" exitCode=0 Feb 26 15:18:03 crc kubenswrapper[4809]: I0226 15:18:03.533280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" event={"ID":"4acd1010-3a83-49a0-b4c3-d13792e73fdd","Type":"ContainerDied","Data":"6ac22ed53c37ecd80a88de56bbe5deb31296188adb3c896a5daff2fd6c6fa5b7"} Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.025712 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.145946 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74pqd\" (UniqueName: \"kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd\") pod \"4acd1010-3a83-49a0-b4c3-d13792e73fdd\" (UID: \"4acd1010-3a83-49a0-b4c3-d13792e73fdd\") " Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.152346 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd" (OuterVolumeSpecName: "kube-api-access-74pqd") pod "4acd1010-3a83-49a0-b4c3-d13792e73fdd" (UID: "4acd1010-3a83-49a0-b4c3-d13792e73fdd"). InnerVolumeSpecName "kube-api-access-74pqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.249045 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74pqd\" (UniqueName: \"kubernetes.io/projected/4acd1010-3a83-49a0-b4c3-d13792e73fdd-kube-api-access-74pqd\") on node \"crc\" DevicePath \"\"" Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.561082 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" event={"ID":"4acd1010-3a83-49a0-b4c3-d13792e73fdd","Type":"ContainerDied","Data":"785acac3c7095fc5a398f26669e0dc9bf3155bc1daa8e5d5b0e7d9eeb4ac10ba"} Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.561131 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785acac3c7095fc5a398f26669e0dc9bf3155bc1daa8e5d5b0e7d9eeb4ac10ba" Feb 26 15:18:05 crc kubenswrapper[4809]: I0226 15:18:05.561208 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535318-gdj6g" Feb 26 15:18:06 crc kubenswrapper[4809]: I0226 15:18:06.111297 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-j4qwp"] Feb 26 15:18:06 crc kubenswrapper[4809]: I0226 15:18:06.126566 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535312-j4qwp"] Feb 26 15:18:06 crc kubenswrapper[4809]: I0226 15:18:06.277222 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61967b6a-d1f9-4781-b179-c509f0049e9b" path="/var/lib/kubelet/pods/61967b6a-d1f9-4781-b179-c509f0049e9b/volumes" Feb 26 15:18:11 crc kubenswrapper[4809]: I0226 15:18:11.795865 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:18:11 crc kubenswrapper[4809]: I0226 15:18:11.796344 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:18:13 crc kubenswrapper[4809]: I0226 15:18:13.901236 4809 scope.go:117] "RemoveContainer" containerID="fb4d837be23525d7d92e25da41158e0b09cb8171a1ad8a28b1963ab250188352" Feb 26 15:18:41 crc kubenswrapper[4809]: I0226 15:18:41.794618 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:18:41 crc kubenswrapper[4809]: I0226 15:18:41.795221 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:19:11 crc kubenswrapper[4809]: I0226 15:19:11.794399 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:19:11 crc kubenswrapper[4809]: I0226 15:19:11.794907 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:19:11 crc kubenswrapper[4809]: I0226 15:19:11.794959 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:19:11 crc kubenswrapper[4809]: I0226 15:19:11.796066 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:19:11 crc kubenswrapper[4809]: I0226 15:19:11.796127 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" gracePeriod=600 Feb 26 15:19:11 crc kubenswrapper[4809]: E0226 15:19:11.921667 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:19:12 crc kubenswrapper[4809]: I0226 15:19:12.486617 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" exitCode=0 Feb 26 15:19:12 crc kubenswrapper[4809]: I0226 15:19:12.486749 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26"} Feb 26 15:19:12 crc kubenswrapper[4809]: I0226 15:19:12.487234 4809 scope.go:117] "RemoveContainer" containerID="518854c5b4b989fd57f4dfc85753b7a50a8880fabd2b2b8a7732fa79a468ebb1" Feb 26 15:19:12 crc kubenswrapper[4809]: I0226 15:19:12.488368 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:19:12 crc kubenswrapper[4809]: E0226 15:19:12.488890 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:19:25 crc kubenswrapper[4809]: I0226 15:19:25.258030 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:19:25 crc kubenswrapper[4809]: E0226 15:19:25.259606 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:19:36 crc kubenswrapper[4809]: I0226 15:19:36.256802 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:19:36 crc kubenswrapper[4809]: E0226 15:19:36.257695 4809 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:19:47 crc kubenswrapper[4809]: I0226 15:19:47.257189 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:19:47 crc kubenswrapper[4809]: E0226 15:19:47.257899 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.161375 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535320-vd9cj"] Feb 26 15:20:00 crc kubenswrapper[4809]: E0226 15:20:00.162553 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acd1010-3a83-49a0-b4c3-d13792e73fdd" containerName="oc" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.162571 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acd1010-3a83-49a0-b4c3-d13792e73fdd" containerName="oc" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.162893 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4acd1010-3a83-49a0-b4c3-d13792e73fdd" containerName="oc" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.163903 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.166156 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.166156 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.171738 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.196447 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535320-vd9cj"] Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.247193 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rk2\" (UniqueName: \"kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2\") pod \"auto-csr-approver-29535320-vd9cj\" (UID: \"8bc20655-012e-4602-8531-b90d457756ef\") " pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.349584 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rk2\" (UniqueName: \"kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2\") pod \"auto-csr-approver-29535320-vd9cj\" (UID: \"8bc20655-012e-4602-8531-b90d457756ef\") " pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.376378 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rk2\" (UniqueName: \"kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2\") pod \"auto-csr-approver-29535320-vd9cj\" (UID: \"8bc20655-012e-4602-8531-b90d457756ef\") " pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:00 crc kubenswrapper[4809]: I0226 15:20:00.487883 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:01 crc kubenswrapper[4809]: I0226 15:20:01.041284 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535320-vd9cj"] Feb 26 15:20:01 crc kubenswrapper[4809]: W0226 15:20:01.043598 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bc20655_012e_4602_8531_b90d457756ef.slice/crio-c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16 WatchSource:0}: Error finding container c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16: Status 404 returned error can't find the container with id c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16 Feb 26 15:20:01 crc kubenswrapper[4809]: I0226 15:20:01.047753 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:20:01 crc kubenswrapper[4809]: I0226 15:20:01.165201 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" event={"ID":"8bc20655-012e-4602-8531-b90d457756ef","Type":"ContainerStarted","Data":"c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16"} Feb 26 15:20:02 crc kubenswrapper[4809]: I0226 15:20:02.267318 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:20:02 crc kubenswrapper[4809]: E0226 15:20:02.268024 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:03 crc kubenswrapper[4809]: I0226 15:20:03.200196 4809 generic.go:334] "Generic (PLEG): container finished" podID="8bc20655-012e-4602-8531-b90d457756ef" containerID="922d85b69a824daa64724de71491525b4be8c4ddac2f49c0c7af108c2b0115be" exitCode=0 Feb 26 15:20:03 crc kubenswrapper[4809]: I0226 15:20:03.200254 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" event={"ID":"8bc20655-012e-4602-8531-b90d457756ef","Type":"ContainerDied","Data":"922d85b69a824daa64724de71491525b4be8c4ddac2f49c0c7af108c2b0115be"} Feb 26 15:20:04 crc kubenswrapper[4809]: I0226 15:20:04.697654 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:04 crc kubenswrapper[4809]: I0226 15:20:04.796393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9rk2\" (UniqueName: \"kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2\") pod \"8bc20655-012e-4602-8531-b90d457756ef\" (UID: \"8bc20655-012e-4602-8531-b90d457756ef\") " Feb 26 15:20:04 crc kubenswrapper[4809]: I0226 15:20:04.805319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2" (OuterVolumeSpecName: "kube-api-access-l9rk2") pod "8bc20655-012e-4602-8531-b90d457756ef" (UID: "8bc20655-012e-4602-8531-b90d457756ef"). InnerVolumeSpecName "kube-api-access-l9rk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:20:04 crc kubenswrapper[4809]: I0226 15:20:04.899613 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9rk2\" (UniqueName: \"kubernetes.io/projected/8bc20655-012e-4602-8531-b90d457756ef-kube-api-access-l9rk2\") on node \"crc\" DevicePath \"\"" Feb 26 15:20:05 crc kubenswrapper[4809]: I0226 15:20:05.228096 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" event={"ID":"8bc20655-012e-4602-8531-b90d457756ef","Type":"ContainerDied","Data":"c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16"} Feb 26 15:20:05 crc kubenswrapper[4809]: I0226 15:20:05.228137 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9550bd34b0eb251b399fb77f86681803ab8aae4988c41d5d522981ac23ebe16" Feb 26 15:20:05 crc kubenswrapper[4809]: I0226 15:20:05.228169 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535320-vd9cj" Feb 26 15:20:05 crc kubenswrapper[4809]: I0226 15:20:05.778467 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-fts7l"] Feb 26 15:20:05 crc kubenswrapper[4809]: I0226 15:20:05.792525 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535314-fts7l"] Feb 26 15:20:06 crc kubenswrapper[4809]: I0226 15:20:06.274633 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c749b7d7-5b88-487f-9673-9e5ccee431fa" path="/var/lib/kubelet/pods/c749b7d7-5b88-487f-9673-9e5ccee431fa/volumes" Feb 26 15:20:14 crc kubenswrapper[4809]: I0226 15:20:14.055707 4809 scope.go:117] "RemoveContainer" containerID="8b132cf59776aa053c26c690d3ec9b52dc5ba09a529d1094f596ffe82bc48363" Feb 26 15:20:14 crc kubenswrapper[4809]: I0226 15:20:14.257229 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:20:14 crc kubenswrapper[4809]: E0226 15:20:14.258106 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:25 crc kubenswrapper[4809]: I0226 15:20:25.259616 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:20:25 crc kubenswrapper[4809]: E0226 15:20:25.260825 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:39 crc kubenswrapper[4809]: I0226 15:20:39.257135 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:20:39 crc kubenswrapper[4809]: E0226 15:20:39.257873 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.472207 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:20:47 crc kubenswrapper[4809]: E0226 15:20:47.473364 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc20655-012e-4602-8531-b90d457756ef" containerName="oc" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.473377 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc20655-012e-4602-8531-b90d457756ef" containerName="oc" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.473626 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc20655-012e-4602-8531-b90d457756ef" containerName="oc" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.475174 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.491053 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.562109 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.563072 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.563180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5mr\" (UniqueName: \"kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.664965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.665049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5mr\" (UniqueName: \"kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.665098 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.666115 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.666331 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.693737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5mr\" (UniqueName: \"kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr\") pod \"redhat-operators-c26t4\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:47 crc kubenswrapper[4809]: I0226 15:20:47.859688 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:20:48 crc kubenswrapper[4809]: I0226 15:20:48.343293 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:20:48 crc kubenswrapper[4809]: I0226 15:20:48.828002 4809 generic.go:334] "Generic (PLEG): container finished" podID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerID="7d23081e8b528ef66b8c219c58fc32c935d63d8c8db74825b4519e1ce4d4b736" exitCode=0 Feb 26 15:20:48 crc kubenswrapper[4809]: I0226 15:20:48.828105 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerDied","Data":"7d23081e8b528ef66b8c219c58fc32c935d63d8c8db74825b4519e1ce4d4b736"} Feb 26 15:20:48 crc kubenswrapper[4809]: I0226 15:20:48.828129 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerStarted","Data":"48a68ab12ee7133d0114960e537ab7515924fe427cbb3ebbb91a5fc2e5800670"} Feb 26 15:20:50 crc kubenswrapper[4809]: I0226 15:20:50.853419 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerStarted","Data":"9f311b63d31c5bef240ef57f0c12aa2b3eac1a8a0820b41ab3fe18c61aa41d7e"} Feb 26 15:20:52 crc kubenswrapper[4809]: I0226 15:20:52.264183 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:20:52 crc kubenswrapper[4809]: E0226 15:20:52.264670 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:20:59 crc kubenswrapper[4809]: I0226 15:20:59.960050 4809 generic.go:334] "Generic (PLEG): container finished" podID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerID="9f311b63d31c5bef240ef57f0c12aa2b3eac1a8a0820b41ab3fe18c61aa41d7e" exitCode=0 Feb 26 15:20:59 crc kubenswrapper[4809]: I0226 15:20:59.960109 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerDied","Data":"9f311b63d31c5bef240ef57f0c12aa2b3eac1a8a0820b41ab3fe18c61aa41d7e"} Feb 26 15:21:00 crc kubenswrapper[4809]: I0226 15:21:00.972314 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerStarted","Data":"469c5e7b7cb9ea7204b9f00668c0c339382584ac67cc450eceb006f6c4d16099"} Feb 26 15:21:00 crc kubenswrapper[4809]: I0226 15:21:00.997159 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c26t4" podStartSLOduration=2.183217068 podStartE2EDuration="13.997139163s" podCreationTimestamp="2026-02-26 15:20:47 +0000 UTC" firstStartedPulling="2026-02-26 15:20:48.830776334 +0000 UTC m=+4027.304096867" lastFinishedPulling="2026-02-26 15:21:00.644698419 +0000 UTC m=+4039.118018962" observedRunningTime="2026-02-26 15:21:00.99669031 +0000 UTC m=+4039.470010863" watchObservedRunningTime="2026-02-26 15:21:00.997139163 +0000 UTC m=+4039.470459686" Feb 26 15:21:07 crc kubenswrapper[4809]: I0226 15:21:07.257262 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:21:07 crc kubenswrapper[4809]: E0226 15:21:07.258324 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:21:07 crc kubenswrapper[4809]: I0226 15:21:07.860824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:07 crc kubenswrapper[4809]: I0226 15:21:07.861107 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:08 crc kubenswrapper[4809]: I0226 15:21:08.937304 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c26t4" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" probeResult="failure" output=< Feb 26 15:21:08 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:21:08 crc kubenswrapper[4809]: > Feb 26 15:21:18 crc kubenswrapper[4809]: I0226 15:21:18.915554 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c26t4" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" probeResult="failure" output=< Feb 26 15:21:18 crc kubenswrapper[4809]: timeout: failed to connect 
service ":50051" within 1s Feb 26 15:21:18 crc kubenswrapper[4809]: > Feb 26 15:21:20 crc kubenswrapper[4809]: I0226 15:21:20.258146 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:21:20 crc kubenswrapper[4809]: E0226 15:21:20.258909 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:21:28 crc kubenswrapper[4809]: I0226 15:21:28.484828 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:28 crc kubenswrapper[4809]: I0226 15:21:28.535374 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:28 crc kubenswrapper[4809]: I0226 15:21:28.724669 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:21:29 crc kubenswrapper[4809]: E0226 15:21:29.381650 4809 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:34078->38.102.83.74:34305: write tcp 38.102.83.74:34078->38.102.83.74:34305: write: broken pipe Feb 26 15:21:30 crc kubenswrapper[4809]: I0226 15:21:30.330704 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c26t4" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" containerID="cri-o://469c5e7b7cb9ea7204b9f00668c0c339382584ac67cc450eceb006f6c4d16099" gracePeriod=2 Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.389083 4809 generic.go:334] "Generic (PLEG): container finished" podID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerID="469c5e7b7cb9ea7204b9f00668c0c339382584ac67cc450eceb006f6c4d16099" exitCode=0 Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.389171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerDied","Data":"469c5e7b7cb9ea7204b9f00668c0c339382584ac67cc450eceb006f6c4d16099"} Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.766829 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.835276 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content\") pod \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.835848 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities\") pod \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.835897 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk5mr\" (UniqueName: \"kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr\") pod \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\" (UID: \"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d\") " Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.836855 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities" (OuterVolumeSpecName: "utilities") pod "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" (UID: "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.846294 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr" (OuterVolumeSpecName: "kube-api-access-wk5mr") pod "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" (UID: "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d"). InnerVolumeSpecName "kube-api-access-wk5mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.939369 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.939405 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk5mr\" (UniqueName: \"kubernetes.io/projected/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-kube-api-access-wk5mr\") on node \"crc\" DevicePath \"\"" Feb 26 15:21:31 crc kubenswrapper[4809]: I0226 15:21:31.970129 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" (UID: "53d5ea2f-61b3-4e02-aa4e-ed301f2a521d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.041573 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.264402 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:21:32 crc kubenswrapper[4809]: E0226 15:21:32.264747 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.414969 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c26t4" event={"ID":"53d5ea2f-61b3-4e02-aa4e-ed301f2a521d","Type":"ContainerDied","Data":"48a68ab12ee7133d0114960e537ab7515924fe427cbb3ebbb91a5fc2e5800670"} Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.415043 4809 scope.go:117] "RemoveContainer" containerID="469c5e7b7cb9ea7204b9f00668c0c339382584ac67cc450eceb006f6c4d16099" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.415053 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c26t4" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.448465 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.455300 4809 scope.go:117] "RemoveContainer" containerID="9f311b63d31c5bef240ef57f0c12aa2b3eac1a8a0820b41ab3fe18c61aa41d7e" Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.459464 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c26t4"] Feb 26 15:21:32 crc kubenswrapper[4809]: I0226 15:21:32.492184 4809 scope.go:117] "RemoveContainer" containerID="7d23081e8b528ef66b8c219c58fc32c935d63d8c8db74825b4519e1ce4d4b736" Feb 26 15:21:34 crc kubenswrapper[4809]: I0226 15:21:34.276851 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" path="/var/lib/kubelet/pods/53d5ea2f-61b3-4e02-aa4e-ed301f2a521d/volumes" Feb 26 15:21:47 crc kubenswrapper[4809]: I0226 15:21:47.258390 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:21:47 crc kubenswrapper[4809]: E0226 15:21:47.259479 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:21:52 crc kubenswrapper[4809]: I0226 15:21:52.741056 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" 
probeResult="failure" output="command timed out" Feb 26 15:21:52 crc kubenswrapper[4809]: I0226 15:21:52.742936 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:21:59 crc kubenswrapper[4809]: I0226 15:21:59.257098 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:21:59 crc kubenswrapper[4809]: E0226 15:21:59.257827 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.148842 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535322-jm5hw"] Feb 26 15:22:00 crc kubenswrapper[4809]: E0226 15:22:00.150167 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="extract-utilities" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.150219 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="extract-utilities" Feb 26 15:22:00 crc kubenswrapper[4809]: E0226 15:22:00.150299 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.150319 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" Feb 26 15:22:00 crc kubenswrapper[4809]: E0226 15:22:00.150367 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="extract-content" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.150385 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="extract-content" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.150986 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d5ea2f-61b3-4e02-aa4e-ed301f2a521d" containerName="registry-server" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.152659 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.156402 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.156625 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.157676 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.172467 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535322-jm5hw"] Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.216281 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmtx\" (UniqueName: \"kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx\") pod \"auto-csr-approver-29535322-jm5hw\" (UID: \"2891098c-f479-47bc-b960-799778a535c9\") " pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.319587 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnmtx\" (UniqueName: \"kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx\") pod \"auto-csr-approver-29535322-jm5hw\" (UID: \"2891098c-f479-47bc-b960-799778a535c9\") " pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.348791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnmtx\" (UniqueName: \"kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx\") pod \"auto-csr-approver-29535322-jm5hw\" (UID: \"2891098c-f479-47bc-b960-799778a535c9\") " pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:00 crc kubenswrapper[4809]: I0226 15:22:00.479800 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:01 crc kubenswrapper[4809]: W0226 15:22:01.030986 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2891098c_f479_47bc_b960_799778a535c9.slice/crio-411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f WatchSource:0}: Error finding container 411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f: Status 404 returned error can't find the container with id 411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f Feb 26 15:22:01 crc kubenswrapper[4809]: I0226 15:22:01.044883 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535322-jm5hw"] Feb 26 15:22:01 crc kubenswrapper[4809]: I0226 15:22:01.870866 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" event={"ID":"2891098c-f479-47bc-b960-799778a535c9","Type":"ContainerStarted","Data":"411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f"} Feb 26 15:22:02 crc kubenswrapper[4809]: I0226 15:22:02.883199 4809 generic.go:334] "Generic (PLEG): container finished" podID="2891098c-f479-47bc-b960-799778a535c9" containerID="4ff5d00bd13b0a41877d93a2c645eb862c0321976a61a1519326cdb23c564116" exitCode=0 Feb 26 15:22:02 crc kubenswrapper[4809]: I0226 15:22:02.883344 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" event={"ID":"2891098c-f479-47bc-b960-799778a535c9","Type":"ContainerDied","Data":"4ff5d00bd13b0a41877d93a2c645eb862c0321976a61a1519326cdb23c564116"} Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.319845 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.454631 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnmtx\" (UniqueName: \"kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx\") pod \"2891098c-f479-47bc-b960-799778a535c9\" (UID: \"2891098c-f479-47bc-b960-799778a535c9\") " Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.466067 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx" (OuterVolumeSpecName: "kube-api-access-vnmtx") pod "2891098c-f479-47bc-b960-799778a535c9" (UID: "2891098c-f479-47bc-b960-799778a535c9"). InnerVolumeSpecName "kube-api-access-vnmtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.557504 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnmtx\" (UniqueName: \"kubernetes.io/projected/2891098c-f479-47bc-b960-799778a535c9-kube-api-access-vnmtx\") on node \"crc\" DevicePath \"\"" Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.913004 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" event={"ID":"2891098c-f479-47bc-b960-799778a535c9","Type":"ContainerDied","Data":"411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f"} Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.913281 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411f753edd963f646d1dbe6a08251f7976ea274da022034a0cdf9485cea1f64f" Feb 26 15:22:04 crc kubenswrapper[4809]: I0226 15:22:04.913132 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535322-jm5hw" Feb 26 15:22:05 crc kubenswrapper[4809]: I0226 15:22:05.426629 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535316-56sk5"] Feb 26 15:22:05 crc kubenswrapper[4809]: I0226 15:22:05.437379 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535316-56sk5"] Feb 26 15:22:06 crc kubenswrapper[4809]: I0226 15:22:06.274348 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8aa4f8-f5d6-4570-a388-281797d0184c" path="/var/lib/kubelet/pods/6c8aa4f8-f5d6-4570-a388-281797d0184c/volumes" Feb 26 15:22:10 crc kubenswrapper[4809]: I0226 15:22:10.258121 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:22:10 crc kubenswrapper[4809]: E0226 15:22:10.258668 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:22:14 crc kubenswrapper[4809]: I0226 15:22:14.170195 4809 scope.go:117] "RemoveContainer" containerID="9c479b0221bfb1aebd8f2b1de626ecac0b4e10a30a0314474618ebcba94df4d7" Feb 26 15:22:23 crc kubenswrapper[4809]: I0226 15:22:23.257304 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:22:23 crc kubenswrapper[4809]: E0226 15:22:23.257985 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:22:37 crc kubenswrapper[4809]: I0226 15:22:37.257711 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:22:37 crc kubenswrapper[4809]: E0226 15:22:37.258852 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:22:50 crc kubenswrapper[4809]: I0226 15:22:50.256950 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:22:50 crc kubenswrapper[4809]: E0226 15:22:50.257812 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:23:01 crc kubenswrapper[4809]: I0226 15:23:01.257059 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:23:01 crc kubenswrapper[4809]: E0226 15:23:01.257940 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.707469 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:08 crc kubenswrapper[4809]: E0226 15:23:08.708739 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2891098c-f479-47bc-b960-799778a535c9" containerName="oc" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.708759 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2891098c-f479-47bc-b960-799778a535c9" containerName="oc" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.709177 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2891098c-f479-47bc-b960-799778a535c9" containerName="oc" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.711945 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.721642 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.807938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.808470 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84zx\" (UniqueName: \"kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.808621 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.911617 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84zx\" (UniqueName: \"kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.911924 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.912003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.912476 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.912510 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:08 crc kubenswrapper[4809]: I0226 15:23:08.932046 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x84zx\" (UniqueName: \"kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx\") pod \"community-operators-vbqhf\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:09 crc kubenswrapper[4809]: I0226 15:23:09.036562 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:09 crc kubenswrapper[4809]: I0226 15:23:09.587973 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:09 crc kubenswrapper[4809]: I0226 15:23:09.777851 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerStarted","Data":"70d2cf078e4932254c635bae621769517eb626df6fe026d7d9a6311d098343e1"} Feb 26 15:23:10 crc kubenswrapper[4809]: I0226 15:23:10.791743 4809 generic.go:334] "Generic (PLEG): container finished" podID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerID="6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16" exitCode=0 Feb 26 15:23:10 crc kubenswrapper[4809]: I0226 15:23:10.791813 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerDied","Data":"6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16"} Feb 26 15:23:12 crc kubenswrapper[4809]: I0226 15:23:12.886760 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerStarted","Data":"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5"} Feb 26 15:23:13 crc kubenswrapper[4809]: I0226 15:23:13.916777 4809 generic.go:334] "Generic (PLEG): container finished" podID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerID="68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5" exitCode=0 Feb 26 15:23:13 crc kubenswrapper[4809]: I0226 15:23:13.916871 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerDied","Data":"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5"} Feb 26 15:23:14 crc kubenswrapper[4809]: I0226 15:23:14.257718 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:23:14 crc kubenswrapper[4809]: E0226 15:23:14.258150 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:23:14 crc kubenswrapper[4809]: I0226 15:23:14.932713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerStarted","Data":"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877"} Feb 26 15:23:14 crc kubenswrapper[4809]: I0226 15:23:14.964068 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbqhf" podStartSLOduration=3.3984542859999998 podStartE2EDuration="6.964042896s" podCreationTimestamp="2026-02-26 15:23:08 +0000 UTC" firstStartedPulling="2026-02-26 15:23:10.796125007 +0000 UTC m=+4169.269445530" lastFinishedPulling="2026-02-26 15:23:14.361713597 +0000 UTC m=+4172.835034140" observedRunningTime="2026-02-26 15:23:14.951335796 +0000 UTC m=+4173.424656319" watchObservedRunningTime="2026-02-26 15:23:14.964042896 +0000 UTC m=+4173.437363419" Feb 26 15:23:19 crc kubenswrapper[4809]: I0226 15:23:19.037181 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:19 crc kubenswrapper[4809]: I0226 15:23:19.037791 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:19 crc kubenswrapper[4809]: I0226 15:23:19.117670 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:20 crc kubenswrapper[4809]: I0226 15:23:20.081908 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:20 crc kubenswrapper[4809]: I0226 15:23:20.169005 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.023594 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbqhf" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="registry-server" containerID="cri-o://f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877" gracePeriod=2 Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.754561 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.877570 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content\") pod \"901c8a5a-2766-4b60-ab45-16b2953abd63\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.877684 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x84zx\" (UniqueName: \"kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx\") pod \"901c8a5a-2766-4b60-ab45-16b2953abd63\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.877806 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities\") pod \"901c8a5a-2766-4b60-ab45-16b2953abd63\" (UID: \"901c8a5a-2766-4b60-ab45-16b2953abd63\") " Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.879003 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities" (OuterVolumeSpecName: "utilities") pod "901c8a5a-2766-4b60-ab45-16b2953abd63" (UID: "901c8a5a-2766-4b60-ab45-16b2953abd63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.885972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx" (OuterVolumeSpecName: "kube-api-access-x84zx") pod "901c8a5a-2766-4b60-ab45-16b2953abd63" (UID: "901c8a5a-2766-4b60-ab45-16b2953abd63"). InnerVolumeSpecName "kube-api-access-x84zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.933999 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901c8a5a-2766-4b60-ab45-16b2953abd63" (UID: "901c8a5a-2766-4b60-ab45-16b2953abd63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.980918 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.980973 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901c8a5a-2766-4b60-ab45-16b2953abd63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:23:22 crc kubenswrapper[4809]: I0226 15:23:22.980993 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x84zx\" (UniqueName: \"kubernetes.io/projected/901c8a5a-2766-4b60-ab45-16b2953abd63-kube-api-access-x84zx\") on node \"crc\" DevicePath \"\"" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.038405 4809 generic.go:334] "Generic (PLEG): container finished" podID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerID="f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877" exitCode=0 Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.038463 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerDied","Data":"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877"} Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.038844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqhf" event={"ID":"901c8a5a-2766-4b60-ab45-16b2953abd63","Type":"ContainerDied","Data":"70d2cf078e4932254c635bae621769517eb626df6fe026d7d9a6311d098343e1"} Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.038880 4809 scope.go:117] "RemoveContainer" containerID="f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.038478 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbqhf" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.068226 4809 scope.go:117] "RemoveContainer" containerID="68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.091885 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.101717 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbqhf"] Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.108842 4809 scope.go:117] "RemoveContainer" containerID="6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.176636 4809 scope.go:117] "RemoveContainer" containerID="f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877" Feb 26 15:23:23 crc kubenswrapper[4809]: E0226 15:23:23.177229 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877\": container with ID starting with f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877 not found: ID does not exist" containerID="f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.177335 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877"} err="failed to get container status \"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877\": rpc error: code = NotFound desc = could not find container \"f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877\": container with ID starting with f5eda3a9240b808219f6b5475f6245848a773366674895821f8f4649bd3d8877 not found: ID does not exist" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.177428 4809 scope.go:117] "RemoveContainer" containerID="68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5" Feb 26 15:23:23 crc kubenswrapper[4809]: E0226 15:23:23.177972 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5\": container with ID starting with 68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5 not found: ID does not exist" containerID="68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.178007 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5"} err="failed to get container status \"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5\": rpc error: code = NotFound desc = could not find container \"68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5\": container with ID starting with 68f6ad323f86221386f78b1036aeaf0dbb738389d96b0889977ddd7a1477aaa5 not found: ID does not exist" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.178042 4809 scope.go:117] "RemoveContainer" containerID="6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16" Feb 26 15:23:23 crc kubenswrapper[4809]: E0226 15:23:23.178295 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16\": container with ID starting with 6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16 not found: ID does not exist" containerID="6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16" Feb 26 15:23:23 crc kubenswrapper[4809]: I0226 15:23:23.178383 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16"} err="failed to get container status \"6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16\": rpc error: code = NotFound desc = could not find container \"6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16\": container with ID starting with 6ee532a3c26b8c715c7860adb877365831457e1d61dbd2b2df034975071ecc16 not found: ID does not exist" Feb 26 15:23:24 crc kubenswrapper[4809]: I0226 15:23:24.278254 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" path="/var/lib/kubelet/pods/901c8a5a-2766-4b60-ab45-16b2953abd63/volumes" Feb 26 15:23:26 crc kubenswrapper[4809]: I0226 15:23:26.262386 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:23:26 crc kubenswrapper[4809]: E0226 15:23:26.264144 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:23:41 crc kubenswrapper[4809]: I0226 15:23:41.256836 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:23:41 crc kubenswrapper[4809]: E0226 15:23:41.259452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:23:53 crc kubenswrapper[4809]: I0226 15:23:53.257135 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:23:53 crc kubenswrapper[4809]: E0226 15:23:53.257947 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.149754 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535324-fkljv"] Feb 26 15:24:00 crc kubenswrapper[4809]: E0226 15:24:00.150803 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" 
containerName="extract-utilities" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.150816 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="extract-utilities" Feb 26 15:24:00 crc kubenswrapper[4809]: E0226 15:24:00.150846 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="extract-content" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.150852 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="extract-content" Feb 26 15:24:00 crc kubenswrapper[4809]: E0226 15:24:00.150871 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="registry-server" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.150878 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="registry-server" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.151108 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="901c8a5a-2766-4b60-ab45-16b2953abd63" containerName="registry-server" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.151945 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.153888 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.155962 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.171581 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.219969 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535324-fkljv"] Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.232975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8zj\" (UniqueName: \"kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj\") pod \"auto-csr-approver-29535324-fkljv\" (UID: \"488370bb-24aa-4662-be3e-861af2385c7e\") " pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.335984 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8zj\" (UniqueName: \"kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj\") pod \"auto-csr-approver-29535324-fkljv\" (UID: \"488370bb-24aa-4662-be3e-861af2385c7e\") " pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.361130 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8zj\" (UniqueName: \"kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj\") pod \"auto-csr-approver-29535324-fkljv\" (UID: \"488370bb-24aa-4662-be3e-861af2385c7e\") " pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.517818 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:00 crc kubenswrapper[4809]: I0226 15:24:00.998633 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535324-fkljv"] Feb 26 15:24:01 crc kubenswrapper[4809]: I0226 15:24:01.495598 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535324-fkljv" event={"ID":"488370bb-24aa-4662-be3e-861af2385c7e","Type":"ContainerStarted","Data":"07823d83a5cb147dc82655d80080d4f5f954e211fa0b1e24bbf018d4a3c2048b"} Feb 26 15:24:03 crc kubenswrapper[4809]: I0226 15:24:03.565691 4809 generic.go:334] "Generic (PLEG): container finished" podID="488370bb-24aa-4662-be3e-861af2385c7e" containerID="58b43acc38e743212deece42d52c322200f862f022eeebd6ba4a0a6dbd78dbf3" exitCode=0 Feb 26 15:24:03 crc kubenswrapper[4809]: I0226 15:24:03.566178 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535324-fkljv" event={"ID":"488370bb-24aa-4662-be3e-861af2385c7e","Type":"ContainerDied","Data":"58b43acc38e743212deece42d52c322200f862f022eeebd6ba4a0a6dbd78dbf3"} Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.071577 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.184736 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8zj\" (UniqueName: \"kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj\") pod \"488370bb-24aa-4662-be3e-861af2385c7e\" (UID: \"488370bb-24aa-4662-be3e-861af2385c7e\") " Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.192176 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj" (OuterVolumeSpecName: "kube-api-access-hn8zj") pod "488370bb-24aa-4662-be3e-861af2385c7e" (UID: "488370bb-24aa-4662-be3e-861af2385c7e"). InnerVolumeSpecName "kube-api-access-hn8zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.256763 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:24:05 crc kubenswrapper[4809]: E0226 15:24:05.257229 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.288180 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8zj\" (UniqueName: \"kubernetes.io/projected/488370bb-24aa-4662-be3e-861af2385c7e-kube-api-access-hn8zj\") on node \"crc\" DevicePath \"\"" Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.596050 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535324-fkljv" event={"ID":"488370bb-24aa-4662-be3e-861af2385c7e","Type":"ContainerDied","Data":"07823d83a5cb147dc82655d80080d4f5f954e211fa0b1e24bbf018d4a3c2048b"} Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.596091 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07823d83a5cb147dc82655d80080d4f5f954e211fa0b1e24bbf018d4a3c2048b" Feb 26 15:24:05 crc kubenswrapper[4809]: I0226 15:24:05.596183 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535324-fkljv" Feb 26 15:24:06 crc kubenswrapper[4809]: I0226 15:24:06.156453 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535318-gdj6g"] Feb 26 15:24:06 crc kubenswrapper[4809]: I0226 15:24:06.174908 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535318-gdj6g"] Feb 26 15:24:06 crc kubenswrapper[4809]: I0226 15:24:06.276540 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4acd1010-3a83-49a0-b4c3-d13792e73fdd" path="/var/lib/kubelet/pods/4acd1010-3a83-49a0-b4c3-d13792e73fdd/volumes" Feb 26 15:24:14 crc kubenswrapper[4809]: I0226 15:24:14.341727 4809 scope.go:117] "RemoveContainer" containerID="6ac22ed53c37ecd80a88de56bbe5deb31296188adb3c896a5daff2fd6c6fa5b7" Feb 26 15:24:20 crc kubenswrapper[4809]: I0226 15:24:20.258491 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:24:20 crc kubenswrapper[4809]: I0226 15:24:20.865302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c"} Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.118412 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:25:57 crc kubenswrapper[4809]: E0226 15:25:57.119323 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488370bb-24aa-4662-be3e-861af2385c7e" containerName="oc" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.119335 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="488370bb-24aa-4662-be3e-861af2385c7e" containerName="oc" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.119605 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="488370bb-24aa-4662-be3e-861af2385c7e" containerName="oc" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.121280 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.139635 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.237321 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.237744 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkcj\" (UniqueName: \"kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.238034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.340608 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fkcj\" (UniqueName: \"kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.340734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.340822 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.341310 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.341616 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:57 crc kubenswrapper[4809]: I0226 15:25:57.940085 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fkcj\" (UniqueName: \"kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj\") pod \"certified-operators-r8jpb\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:58 crc kubenswrapper[4809]: I0226 15:25:58.060126 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:25:58 crc kubenswrapper[4809]: I0226 15:25:58.529845 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:25:59 crc kubenswrapper[4809]: I0226 15:25:59.096655 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5794423-08dd-4076-b8aa-62955203f9dd" containerID="4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818" exitCode=0 Feb 26 15:25:59 crc kubenswrapper[4809]: I0226 15:25:59.096790 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerDied","Data":"4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818"} Feb 26 15:25:59 crc kubenswrapper[4809]: I0226 15:25:59.097214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerStarted","Data":"7b6735103794c4fba2613fd15d9a3c22ad80cfe0004e81ca1b443236b884bbb2"} Feb 26 15:25:59 crc kubenswrapper[4809]: I0226 15:25:59.101875 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.190092 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535326-dqq4l"] Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.192492 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.199473 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.199724 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.199836 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.223552 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstr2\" (UniqueName: \"kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2\") pod \"auto-csr-approver-29535326-dqq4l\" (UID: \"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c\") " pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.235887 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535326-dqq4l"] Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.326684 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstr2\" (UniqueName: \"kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2\") pod \"auto-csr-approver-29535326-dqq4l\" (UID: \"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c\") " pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.344601 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstr2\" (UniqueName: \"kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2\") pod \"auto-csr-approver-29535326-dqq4l\" (UID: \"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c\") " pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.625063 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.904139 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.907571 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:00 crc kubenswrapper[4809]: I0226 15:26:00.919290 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.051551 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4f74\" (UniqueName: \"kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.051787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.051828 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.107925 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535326-dqq4l"] Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.132853 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerStarted","Data":"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7"} Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.154339 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4f74\" (UniqueName: \"kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.154505 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.154535 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.155071 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " 
pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.155148 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.173125 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4f74\" (UniqueName: \"kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74\") pod \"redhat-marketplace-krv7h\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.234352 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:01 crc kubenswrapper[4809]: I0226 15:26:01.738728 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.146242 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" event={"ID":"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c","Type":"ContainerStarted","Data":"bcf2545b594dba19a7728f5346e08775e7b09f6410c2cfccae4f3f867543ddad"} Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.148856 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5794423-08dd-4076-b8aa-62955203f9dd" containerID="0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7" exitCode=0 Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.148943 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerDied","Data":"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7"} Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.150689 4809 generic.go:334] "Generic (PLEG): container finished" podID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerID="cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3" exitCode=0 Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.150722 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerDied","Data":"cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3"} Feb 26 15:26:02 crc kubenswrapper[4809]: I0226 15:26:02.150756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerStarted","Data":"9ae1f847d32187a7a5c44f673730e529d829aa2cabb9473e669aeb413f728a93"} Feb 26 15:26:03 crc kubenswrapper[4809]: I0226 15:26:03.168716 4809 generic.go:334] "Generic (PLEG): container finished" podID="cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" containerID="2d8a0aa80ae630416b5bd50d429f1c91b87ea99c8707eaa42592c9dc6916f570" exitCode=0 Feb 26 15:26:03 crc kubenswrapper[4809]: I0226 15:26:03.169120 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" 
event={"ID":"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c","Type":"ContainerDied","Data":"2d8a0aa80ae630416b5bd50d429f1c91b87ea99c8707eaa42592c9dc6916f570"} Feb 26 15:26:03 crc kubenswrapper[4809]: I0226 15:26:03.172639 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerStarted","Data":"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71"} Feb 26 15:26:03 crc kubenswrapper[4809]: I0226 15:26:03.216580 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r8jpb" podStartSLOduration=2.748081699 podStartE2EDuration="6.216555912s" podCreationTimestamp="2026-02-26 15:25:57 +0000 UTC" firstStartedPulling="2026-02-26 15:25:59.101420682 +0000 UTC m=+4337.574741245" lastFinishedPulling="2026-02-26 15:26:02.569894935 +0000 UTC m=+4341.043215458" observedRunningTime="2026-02-26 15:26:03.21188461 +0000 UTC m=+4341.685205133" watchObservedRunningTime="2026-02-26 15:26:03.216555912 +0000 UTC m=+4341.689876445" Feb 26 15:26:04 crc kubenswrapper[4809]: I0226 15:26:04.184950 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerStarted","Data":"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb"} Feb 26 15:26:04 crc kubenswrapper[4809]: I0226 15:26:04.693088 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:04 crc kubenswrapper[4809]: I0226 15:26:04.875253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstr2\" (UniqueName: \"kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2\") pod \"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c\" (UID: \"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c\") " Feb 26 15:26:04 crc kubenswrapper[4809]: I0226 15:26:04.885266 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2" (OuterVolumeSpecName: "kube-api-access-zstr2") pod "cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" (UID: "cb48696c-ecbb-46d7-90a0-2fadb5c3b15c"). InnerVolumeSpecName "kube-api-access-zstr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:26:04 crc kubenswrapper[4809]: I0226 15:26:04.978245 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstr2\" (UniqueName: \"kubernetes.io/projected/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c-kube-api-access-zstr2\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.199941 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" event={"ID":"cb48696c-ecbb-46d7-90a0-2fadb5c3b15c","Type":"ContainerDied","Data":"bcf2545b594dba19a7728f5346e08775e7b09f6410c2cfccae4f3f867543ddad"} Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.200076 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf2545b594dba19a7728f5346e08775e7b09f6410c2cfccae4f3f867543ddad" Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.200357 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535326-dqq4l" Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.202663 4809 generic.go:334] "Generic (PLEG): container finished" podID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerID="5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb" exitCode=0 Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.202739 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerDied","Data":"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb"} Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.773313 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535320-vd9cj"] Feb 26 15:26:05 crc kubenswrapper[4809]: I0226 15:26:05.784267 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535320-vd9cj"] Feb 26 15:26:06 crc kubenswrapper[4809]: I0226 15:26:06.232070 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerStarted","Data":"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c"} Feb 26 15:26:06 crc kubenswrapper[4809]: I0226 15:26:06.254125 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krv7h" podStartSLOduration=2.811611126 podStartE2EDuration="6.254097882s" podCreationTimestamp="2026-02-26 15:26:00 +0000 UTC" firstStartedPulling="2026-02-26 15:26:02.153397481 +0000 UTC m=+4340.626718004" lastFinishedPulling="2026-02-26 15:26:05.595884237 +0000 UTC m=+4344.069204760" observedRunningTime="2026-02-26 15:26:06.252058544 +0000 UTC m=+4344.725379107" watchObservedRunningTime="2026-02-26 15:26:06.254097882 +0000 UTC m=+4344.727418425" Feb 26 15:26:06 crc kubenswrapper[4809]: I0226 15:26:06.270245 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc20655-012e-4602-8531-b90d457756ef" path="/var/lib/kubelet/pods/8bc20655-012e-4602-8531-b90d457756ef/volumes" Feb 26 15:26:08 crc kubenswrapper[4809]: I0226 15:26:08.060501 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:08 crc kubenswrapper[4809]: I0226 15:26:08.060921 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:09 crc kubenswrapper[4809]: I0226 15:26:09.129148 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r8jpb" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="registry-server" probeResult="failure" output=< Feb 26 15:26:09 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:26:09 crc kubenswrapper[4809]: > Feb 26 15:26:11 crc kubenswrapper[4809]: I0226 15:26:11.283762 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:11 crc kubenswrapper[4809]: I0226 15:26:11.284264 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:11 crc kubenswrapper[4809]: I0226 15:26:11.346601 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:12 crc kubenswrapper[4809]: I0226 15:26:12.373824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:12 crc kubenswrapper[4809]: I0226 15:26:12.435623 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:14 crc kubenswrapper[4809]: I0226 15:26:14.344865 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-krv7h" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="registry-server" containerID="cri-o://0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c" gracePeriod=2 Feb 26 15:26:14 crc kubenswrapper[4809]: I0226 15:26:14.513126 4809 scope.go:117] "RemoveContainer" containerID="922d85b69a824daa64724de71491525b4be8c4ddac2f49c0c7af108c2b0115be" Feb 26 15:26:14 crc kubenswrapper[4809]: I0226 15:26:14.897694 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.001383 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4f74\" (UniqueName: \"kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74\") pod \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.001439 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content\") pod \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.001770 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities\") pod \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\" (UID: \"161fff73-fb1a-4cf3-9ae1-2d437c27508b\") " Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.003136 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities" (OuterVolumeSpecName: "utilities") pod "161fff73-fb1a-4cf3-9ae1-2d437c27508b" (UID: "161fff73-fb1a-4cf3-9ae1-2d437c27508b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.016289 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74" (OuterVolumeSpecName: "kube-api-access-m4f74") pod "161fff73-fb1a-4cf3-9ae1-2d437c27508b" (UID: "161fff73-fb1a-4cf3-9ae1-2d437c27508b"). InnerVolumeSpecName "kube-api-access-m4f74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.029274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "161fff73-fb1a-4cf3-9ae1-2d437c27508b" (UID: "161fff73-fb1a-4cf3-9ae1-2d437c27508b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.105865 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4f74\" (UniqueName: \"kubernetes.io/projected/161fff73-fb1a-4cf3-9ae1-2d437c27508b-kube-api-access-m4f74\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.105925 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.105939 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161fff73-fb1a-4cf3-9ae1-2d437c27508b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.360562 4809 generic.go:334] "Generic (PLEG): container finished" podID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerID="0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c" exitCode=0 Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.360609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerDied","Data":"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c"} Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.360636 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krv7h" event={"ID":"161fff73-fb1a-4cf3-9ae1-2d437c27508b","Type":"ContainerDied","Data":"9ae1f847d32187a7a5c44f673730e529d829aa2cabb9473e669aeb413f728a93"} Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.360656 4809 scope.go:117] "RemoveContainer" containerID="0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.361832 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krv7h" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.397888 4809 scope.go:117] "RemoveContainer" containerID="5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.410240 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.419451 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-krv7h"] Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.424752 4809 scope.go:117] "RemoveContainer" containerID="cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.449353 4809 scope.go:117] "RemoveContainer" containerID="0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c" Feb 26 15:26:15 crc kubenswrapper[4809]: E0226 15:26:15.449909 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c\": container with ID starting with 0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c not found: ID does not exist" containerID="0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.449940 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c"} err="failed to get container status \"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c\": rpc error: code = NotFound desc = could not find container \"0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c\": container with ID starting with 0c2ed22cf49a413f3b4217792a37f33f551c2ff7460fb658748d0506c00b925c not found: ID does not exist" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.449977 4809 scope.go:117] "RemoveContainer" containerID="5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb" Feb 26 15:26:15 crc kubenswrapper[4809]: E0226 15:26:15.450436 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb\": container with ID starting with 5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb not found: ID does not exist" containerID="5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.450478 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb"} err="failed to get container status \"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb\": rpc error: code = NotFound desc = could not find container \"5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb\": container with ID starting with 5a8d7e82e7a4b43ef890744206fb7516736e583c962160cbb38e222ba2f160fb not found: ID does not exist" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.450492 4809 scope.go:117] "RemoveContainer" containerID="cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3" Feb 26 15:26:15 crc kubenswrapper[4809]: E0226 15:26:15.450836 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3\": container with ID starting with cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3 not found: ID does not exist" containerID="cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3" Feb 26 15:26:15 crc kubenswrapper[4809]: I0226 15:26:15.450884 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3"} err="failed to get container status \"cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3\": rpc error: code = NotFound desc = could not find container \"cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3\": container with ID starting with cd9437aa86631e1c01c39814732fb0a2ade1c660954cd61eb5d6e00fc34b66e3 not found: ID does not exist" Feb 26 15:26:16 crc kubenswrapper[4809]: I0226 15:26:16.275459 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" path="/var/lib/kubelet/pods/161fff73-fb1a-4cf3-9ae1-2d437c27508b/volumes" Feb 26 15:26:18 crc kubenswrapper[4809]: I0226 15:26:18.133751 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:18 crc kubenswrapper[4809]: I0226 15:26:18.193895 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:19 crc kubenswrapper[4809]: I0226 15:26:19.718656 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:26:19 crc kubenswrapper[4809]: I0226 15:26:19.721072 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r8jpb" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="registry-server" containerID="cri-o://83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71" gracePeriod=2 Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.298458 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.372790 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fkcj\" (UniqueName: \"kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj\") pod \"f5794423-08dd-4076-b8aa-62955203f9dd\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.372911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities\") pod \"f5794423-08dd-4076-b8aa-62955203f9dd\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.373187 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content\") pod \"f5794423-08dd-4076-b8aa-62955203f9dd\" (UID: \"f5794423-08dd-4076-b8aa-62955203f9dd\") " Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.374156 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities" (OuterVolumeSpecName: "utilities") pod "f5794423-08dd-4076-b8aa-62955203f9dd" (UID: "f5794423-08dd-4076-b8aa-62955203f9dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.374404 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.382154 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj" (OuterVolumeSpecName: "kube-api-access-6fkcj") pod "f5794423-08dd-4076-b8aa-62955203f9dd" (UID: "f5794423-08dd-4076-b8aa-62955203f9dd"). InnerVolumeSpecName "kube-api-access-6fkcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.441788 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5794423-08dd-4076-b8aa-62955203f9dd" (UID: "f5794423-08dd-4076-b8aa-62955203f9dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.445821 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5794423-08dd-4076-b8aa-62955203f9dd" containerID="83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71" exitCode=0 Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.445972 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerDied","Data":"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71"} Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.446139 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8jpb" event={"ID":"f5794423-08dd-4076-b8aa-62955203f9dd","Type":"ContainerDied","Data":"7b6735103794c4fba2613fd15d9a3c22ad80cfe0004e81ca1b443236b884bbb2"} Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.446234 4809 scope.go:117] "RemoveContainer" containerID="83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.446503 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8jpb" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.484413 4809 scope.go:117] "RemoveContainer" containerID="0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.486459 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5794423-08dd-4076-b8aa-62955203f9dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.486485 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fkcj\" (UniqueName: \"kubernetes.io/projected/f5794423-08dd-4076-b8aa-62955203f9dd-kube-api-access-6fkcj\") on node \"crc\" DevicePath \"\"" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.490632 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.503673 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r8jpb"] Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.523377 4809 scope.go:117] "RemoveContainer" containerID="4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.560564 4809 scope.go:117] "RemoveContainer" containerID="83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71" Feb 26 15:26:20 crc kubenswrapper[4809]: E0226 15:26:20.561247 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71\": container with ID starting with 83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71 not found: ID does not exist" containerID="83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.561354 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71"} err="failed to get container status 
\"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71\": rpc error: code = NotFound desc = could not find container \"83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71\": container with ID starting with 83b9b8717ed6150e35e0cc6f10e6e6932e96576422aea1d898d7b4491a7fdf71 not found: ID does not exist" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.561432 4809 scope.go:117] "RemoveContainer" containerID="0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7" Feb 26 15:26:20 crc kubenswrapper[4809]: E0226 15:26:20.563253 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7\": container with ID starting with 0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7 not found: ID does not exist" containerID="0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.563275 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7"} err="failed to get container status \"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7\": rpc error: code = NotFound desc = could not find container \"0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7\": container with ID starting with 0cb1bba3f801dd132f8162ed542bc846fed01f1fedcf515c1ae7e6f8e2d4caa7 not found: ID does not exist" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.563308 4809 scope.go:117] "RemoveContainer" containerID="4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818" Feb 26 15:26:20 crc kubenswrapper[4809]: E0226 15:26:20.563601 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818\": container with ID starting with 4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818 not found: ID does not exist" containerID="4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818" Feb 26 15:26:20 crc kubenswrapper[4809]: I0226 15:26:20.563633 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818"} err="failed to get container status \"4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818\": rpc error: code = NotFound desc = could not find container \"4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818\": container with ID starting with 4d117ef1ab1d496c23749de44d2262ec134afe649ed45161bb6c1bd54e10b818 not found: ID does not exist" Feb 26 15:26:22 crc kubenswrapper[4809]: I0226 15:26:22.270256 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" path="/var/lib/kubelet/pods/f5794423-08dd-4076-b8aa-62955203f9dd/volumes" Feb 26 15:26:41 crc kubenswrapper[4809]: I0226 15:26:41.794525 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:26:41 crc kubenswrapper[4809]: I0226 15:26:41.795059 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:27:11 crc kubenswrapper[4809]: I0226 15:27:11.794339 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:27:11 crc kubenswrapper[4809]: I0226 15:27:11.794956 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:27:41 crc kubenswrapper[4809]: I0226 15:27:41.793710 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:27:41 crc kubenswrapper[4809]: I0226 15:27:41.794329 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:27:41 crc kubenswrapper[4809]: I0226 15:27:41.794380 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:27:41 crc kubenswrapper[4809]: I0226 15:27:41.795314 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:27:41 crc kubenswrapper[4809]: I0226 15:27:41.795368 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c" gracePeriod=600 Feb 26 15:27:42 crc kubenswrapper[4809]: I0226 15:27:42.455291 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c" exitCode=0 Feb 26 15:27:42 crc kubenswrapper[4809]: I0226 15:27:42.455815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c"} Feb 26 15:27:42 crc kubenswrapper[4809]: I0226 15:27:42.455842 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f"} Feb 26 15:27:42 crc kubenswrapper[4809]: I0226 15:27:42.455858 4809 scope.go:117] "RemoveContainer" containerID="3a06b56f6d07399934d3bc68a111a7274130d693195473ecdd3651ca9c78ba26" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.156188 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535328-vld5h"] Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157548 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="registry-server" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157574 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="registry-server" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157609 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="extract-content" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157622 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="extract-content" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157654 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="extract-utilities" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157667 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="extract-utilities" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157703 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="registry-server" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157714 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="registry-server" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157755 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" containerName="oc" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157769 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" containerName="oc" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157800 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="extract-content" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157812 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="extract-content" Feb 26 15:28:00 crc kubenswrapper[4809]: E0226 15:28:00.157840 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="extract-utilities" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.157852 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="extract-utilities" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.158266 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5794423-08dd-4076-b8aa-62955203f9dd" containerName="registry-server" Feb 26 15:28:00 crc 
kubenswrapper[4809]: I0226 15:28:00.158299 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" containerName="oc" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.158332 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="161fff73-fb1a-4cf3-9ae1-2d437c27508b" containerName="registry-server" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.159669 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.161793 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.162115 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.163408 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.169636 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535328-vld5h"] Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.214382 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8wf\" (UniqueName: \"kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf\") pod \"auto-csr-approver-29535328-vld5h\" (UID: \"cb7a652d-07bb-41f2-8dcc-968ab77a092b\") " pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.316455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8wf\" (UniqueName: \"kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf\") pod \"auto-csr-approver-29535328-vld5h\" (UID: \"cb7a652d-07bb-41f2-8dcc-968ab77a092b\") " pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.344903 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8wf\" (UniqueName: \"kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf\") pod \"auto-csr-approver-29535328-vld5h\" (UID: \"cb7a652d-07bb-41f2-8dcc-968ab77a092b\") " pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.495947 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:00 crc kubenswrapper[4809]: I0226 15:28:00.983189 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535328-vld5h"] Feb 26 15:28:01 crc kubenswrapper[4809]: I0226 15:28:01.674451 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535328-vld5h" event={"ID":"cb7a652d-07bb-41f2-8dcc-968ab77a092b","Type":"ContainerStarted","Data":"201b302b1cde7da40eb710e8aa7c5cbaf787723acbfcc362e1a40ae2462204c5"} Feb 26 15:28:03 crc kubenswrapper[4809]: I0226 15:28:03.745971 4809 generic.go:334] "Generic (PLEG): container finished" podID="cb7a652d-07bb-41f2-8dcc-968ab77a092b" containerID="9bae99e11b3f22d0c5d87056b9690a3bf732d2fa9cf7edb7d57f8f0abb17329e" exitCode=0 Feb 26 15:28:03 crc kubenswrapper[4809]: I0226 15:28:03.746068 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535328-vld5h" event={"ID":"cb7a652d-07bb-41f2-8dcc-968ab77a092b","Type":"ContainerDied","Data":"9bae99e11b3f22d0c5d87056b9690a3bf732d2fa9cf7edb7d57f8f0abb17329e"} Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.158361 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.298254 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8wf\" (UniqueName: \"kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf\") pod \"cb7a652d-07bb-41f2-8dcc-968ab77a092b\" (UID: \"cb7a652d-07bb-41f2-8dcc-968ab77a092b\") " Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.303249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf" (OuterVolumeSpecName: "kube-api-access-cr8wf") pod "cb7a652d-07bb-41f2-8dcc-968ab77a092b" (UID: "cb7a652d-07bb-41f2-8dcc-968ab77a092b"). InnerVolumeSpecName "kube-api-access-cr8wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.402083 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8wf\" (UniqueName: \"kubernetes.io/projected/cb7a652d-07bb-41f2-8dcc-968ab77a092b-kube-api-access-cr8wf\") on node \"crc\" DevicePath \"\"" Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.774635 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535328-vld5h" event={"ID":"cb7a652d-07bb-41f2-8dcc-968ab77a092b","Type":"ContainerDied","Data":"201b302b1cde7da40eb710e8aa7c5cbaf787723acbfcc362e1a40ae2462204c5"} Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.774705 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201b302b1cde7da40eb710e8aa7c5cbaf787723acbfcc362e1a40ae2462204c5" Feb 26 15:28:05 crc kubenswrapper[4809]: I0226 15:28:05.774722 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535328-vld5h" Feb 26 15:28:06 crc kubenswrapper[4809]: I0226 15:28:06.245551 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535322-jm5hw"] Feb 26 15:28:06 crc kubenswrapper[4809]: I0226 15:28:06.271941 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535322-jm5hw"] Feb 26 15:28:08 crc kubenswrapper[4809]: I0226 15:28:08.272589 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2891098c-f479-47bc-b960-799778a535c9" path="/var/lib/kubelet/pods/2891098c-f479-47bc-b960-799778a535c9/volumes" Feb 26 15:28:14 crc kubenswrapper[4809]: I0226 15:28:14.720404 4809 scope.go:117] "RemoveContainer" containerID="4ff5d00bd13b0a41877d93a2c645eb862c0321976a61a1519326cdb23c564116" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.153291 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535330-x2kbx"] Feb 26 15:30:00 crc kubenswrapper[4809]: E0226 15:30:00.154259 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7a652d-07bb-41f2-8dcc-968ab77a092b" containerName="oc" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.154272 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7a652d-07bb-41f2-8dcc-968ab77a092b" containerName="oc" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.154490 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7a652d-07bb-41f2-8dcc-968ab77a092b" containerName="oc" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.155337 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.157063 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.157169 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.157726 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.159333 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcxh\" (UniqueName: \"kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh\") pod \"auto-csr-approver-29535330-x2kbx\" (UID: \"131a8634-bb7a-4587-a360-179bf288ac0d\") " pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.178527 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56"] Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.180518 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.186168 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.186449 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.202675 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535330-x2kbx"] Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.214745 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56"] Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.260995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.261086 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcxh\" (UniqueName: \"kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh\") pod \"auto-csr-approver-29535330-x2kbx\" (UID: \"131a8634-bb7a-4587-a360-179bf288ac0d\") " pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.261845 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.261919 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-826hq\" (UniqueName: \"kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.278951 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcxh\" (UniqueName: \"kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh\") pod \"auto-csr-approver-29535330-x2kbx\" (UID: \"131a8634-bb7a-4587-a360-179bf288ac0d\") " pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.363080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.363463 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-826hq\" (UniqueName: \"kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.363494 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.364434 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.367039 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.379069 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-826hq\" (UniqueName: \"kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq\") pod \"collect-profiles-29535330-tjx56\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.473337 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.500615 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:00 crc kubenswrapper[4809]: I0226 15:30:00.982556 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535330-x2kbx"] Feb 26 15:30:01 crc kubenswrapper[4809]: I0226 15:30:01.113233 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56"] Feb 26 15:30:02 crc kubenswrapper[4809]: I0226 15:30:02.165148 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" event={"ID":"0ccee6b2-3cb1-4726-ad24-230ff0186f5a","Type":"ContainerStarted","Data":"2ecf137019bc8023c9dcd8a71d4e9e7f939d0b0ab9e167e90ad056230b9dadc9"} Feb 26 15:30:02 crc kubenswrapper[4809]: I0226 15:30:02.165820 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" event={"ID":"0ccee6b2-3cb1-4726-ad24-230ff0186f5a","Type":"ContainerStarted","Data":"22dc3aa877f9c616507944208dd8debf22eee9728782fcc927cfe0bf47138a79"} Feb 26 15:30:02 crc kubenswrapper[4809]: I0226 15:30:02.171279 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" event={"ID":"131a8634-bb7a-4587-a360-179bf288ac0d","Type":"ContainerStarted","Data":"8908a48354f9b136bb39330bf1ad875adb9641dd96115a80e7e2e36bd942d776"} Feb 26 15:30:02 crc kubenswrapper[4809]: I0226 15:30:02.192082 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" podStartSLOduration=2.192058757 podStartE2EDuration="2.192058757s" podCreationTimestamp="2026-02-26 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:30:02.184874993 +0000 UTC m=+4580.658195536" watchObservedRunningTime="2026-02-26 15:30:02.192058757 +0000 UTC m=+4580.665379280" Feb 26 15:30:03 crc kubenswrapper[4809]: I0226 15:30:03.184080 4809 generic.go:334] "Generic (PLEG): container finished" podID="0ccee6b2-3cb1-4726-ad24-230ff0186f5a" containerID="2ecf137019bc8023c9dcd8a71d4e9e7f939d0b0ab9e167e90ad056230b9dadc9" exitCode=0 Feb 26 15:30:03 crc kubenswrapper[4809]: I0226 15:30:03.184194 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" event={"ID":"0ccee6b2-3cb1-4726-ad24-230ff0186f5a","Type":"ContainerDied","Data":"2ecf137019bc8023c9dcd8a71d4e9e7f939d0b0ab9e167e90ad056230b9dadc9"} Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.633832 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.689212 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-826hq\" (UniqueName: \"kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq\") pod \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.689261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume\") pod \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.689399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume\") pod \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\" (UID: \"0ccee6b2-3cb1-4726-ad24-230ff0186f5a\") " Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.691200 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ccee6b2-3cb1-4726-ad24-230ff0186f5a" (UID: "0ccee6b2-3cb1-4726-ad24-230ff0186f5a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.696249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ccee6b2-3cb1-4726-ad24-230ff0186f5a" (UID: "0ccee6b2-3cb1-4726-ad24-230ff0186f5a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.696335 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq" (OuterVolumeSpecName: "kube-api-access-826hq") pod "0ccee6b2-3cb1-4726-ad24-230ff0186f5a" (UID: "0ccee6b2-3cb1-4726-ad24-230ff0186f5a"). InnerVolumeSpecName "kube-api-access-826hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.791826 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.792136 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-826hq\" (UniqueName: \"kubernetes.io/projected/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-kube-api-access-826hq\") on node \"crc\" DevicePath \"\"" Feb 26 15:30:04 crc kubenswrapper[4809]: I0226 15:30:04.792149 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ccee6b2-3cb1-4726-ad24-230ff0186f5a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:30:05 crc kubenswrapper[4809]: I0226 15:30:05.208824 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" event={"ID":"0ccee6b2-3cb1-4726-ad24-230ff0186f5a","Type":"ContainerDied","Data":"22dc3aa877f9c616507944208dd8debf22eee9728782fcc927cfe0bf47138a79"} Feb 26 15:30:05 crc kubenswrapper[4809]: I0226 15:30:05.208861 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22dc3aa877f9c616507944208dd8debf22eee9728782fcc927cfe0bf47138a79" Feb 26 15:30:05 crc kubenswrapper[4809]: I0226 15:30:05.208909 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535330-tjx56" Feb 26 15:30:05 crc kubenswrapper[4809]: I0226 15:30:05.271780 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp"] Feb 26 15:30:05 crc kubenswrapper[4809]: I0226 15:30:05.282087 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535285-87tzp"] Feb 26 15:30:06 crc kubenswrapper[4809]: I0226 15:30:06.223703 4809 generic.go:334] "Generic (PLEG): container finished" podID="131a8634-bb7a-4587-a360-179bf288ac0d" containerID="7fcba2e7112ebceb64dd66db0b1507a5c27c121a60b9ded870946bf28f38f111" exitCode=0 Feb 26 15:30:06 crc kubenswrapper[4809]: I0226 15:30:06.223853 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" event={"ID":"131a8634-bb7a-4587-a360-179bf288ac0d","Type":"ContainerDied","Data":"7fcba2e7112ebceb64dd66db0b1507a5c27c121a60b9ded870946bf28f38f111"} Feb 26 15:30:06 crc kubenswrapper[4809]: I0226 15:30:06.271712 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac336caa-9f34-4637-a9a8-acd1690cfa57" path="/var/lib/kubelet/pods/ac336caa-9f34-4637-a9a8-acd1690cfa57/volumes" Feb 26 15:30:07 crc kubenswrapper[4809]: I0226 15:30:07.663727 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:07 crc kubenswrapper[4809]: I0226 15:30:07.765380 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bcxh\" (UniqueName: \"kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh\") pod \"131a8634-bb7a-4587-a360-179bf288ac0d\" (UID: \"131a8634-bb7a-4587-a360-179bf288ac0d\") " Feb 26 15:30:07 crc kubenswrapper[4809]: I0226 15:30:07.771481 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh" (OuterVolumeSpecName: "kube-api-access-9bcxh") pod "131a8634-bb7a-4587-a360-179bf288ac0d" (UID: "131a8634-bb7a-4587-a360-179bf288ac0d"). InnerVolumeSpecName "kube-api-access-9bcxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:30:07 crc kubenswrapper[4809]: I0226 15:30:07.868901 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bcxh\" (UniqueName: \"kubernetes.io/projected/131a8634-bb7a-4587-a360-179bf288ac0d-kube-api-access-9bcxh\") on node \"crc\" DevicePath \"\"" Feb 26 15:30:08 crc kubenswrapper[4809]: I0226 15:30:08.253088 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" event={"ID":"131a8634-bb7a-4587-a360-179bf288ac0d","Type":"ContainerDied","Data":"8908a48354f9b136bb39330bf1ad875adb9641dd96115a80e7e2e36bd942d776"} Feb 26 15:30:08 crc kubenswrapper[4809]: I0226 15:30:08.253148 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8908a48354f9b136bb39330bf1ad875adb9641dd96115a80e7e2e36bd942d776" Feb 26 15:30:08 crc kubenswrapper[4809]: I0226 15:30:08.253602 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535330-x2kbx" Feb 26 15:30:08 crc kubenswrapper[4809]: I0226 15:30:08.738094 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535324-fkljv"] Feb 26 15:30:08 crc kubenswrapper[4809]: I0226 15:30:08.750804 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535324-fkljv"] Feb 26 15:30:10 crc kubenswrapper[4809]: I0226 15:30:10.275905 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488370bb-24aa-4662-be3e-861af2385c7e" path="/var/lib/kubelet/pods/488370bb-24aa-4662-be3e-861af2385c7e/volumes" Feb 26 15:30:11 crc kubenswrapper[4809]: I0226 15:30:11.794470 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:30:11 crc kubenswrapper[4809]: I0226 15:30:11.794746 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:30:14 crc kubenswrapper[4809]: I0226 15:30:14.860993 4809 scope.go:117] "RemoveContainer" containerID="58b43acc38e743212deece42d52c322200f862f022eeebd6ba4a0a6dbd78dbf3" Feb 26 15:30:14 crc kubenswrapper[4809]: I0226 15:30:14.974673 4809 scope.go:117] "RemoveContainer" containerID="41a46e6e938e39f69e8d996cf838ce36e6c2e6a2ddaa73e5c7d1447b52cc37f2" Feb 26 15:30:41 crc kubenswrapper[4809]: I0226 15:30:41.793666 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:30:41 crc kubenswrapper[4809]: I0226 15:30:41.794283 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:31:11 crc kubenswrapper[4809]: I0226 15:31:11.793937 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:31:11 crc kubenswrapper[4809]: I0226 15:31:11.795686 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:31:11 crc kubenswrapper[4809]: I0226 15:31:11.795845 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:31:11 crc kubenswrapper[4809]: I0226 15:31:11.796948 4809 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:31:11 crc kubenswrapper[4809]: I0226 15:31:11.797137 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" gracePeriod=600 Feb 26 15:31:11 crc kubenswrapper[4809]: E0226 15:31:11.925151 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:31:12 crc kubenswrapper[4809]: I0226 15:31:12.301297 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" exitCode=0 Feb 26 15:31:12 crc kubenswrapper[4809]: I0226 15:31:12.301522 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f"} Feb 26 15:31:12 crc kubenswrapper[4809]: I0226 15:31:12.301682 4809 scope.go:117] "RemoveContainer" containerID="92a6aa5db841fa344374557f4cf80fe636520cf2cdfc901874764b3f8d153a9c" Feb 26 15:31:12 crc kubenswrapper[4809]: I0226 15:31:12.302710 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:31:12 crc kubenswrapper[4809]: E0226 15:31:12.302999 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:31:26 crc kubenswrapper[4809]: I0226 15:31:26.257906 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:31:26 crc kubenswrapper[4809]: E0226 15:31:26.259475 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:31:38 crc kubenswrapper[4809]: I0226 15:31:38.257964 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 
15:31:38 crc kubenswrapper[4809]: E0226 15:31:38.259157 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.886191 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:31:44 crc kubenswrapper[4809]: E0226 15:31:44.887421 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ccee6b2-3cb1-4726-ad24-230ff0186f5a" containerName="collect-profiles" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.887436 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ccee6b2-3cb1-4726-ad24-230ff0186f5a" containerName="collect-profiles" Feb 26 15:31:44 crc kubenswrapper[4809]: E0226 15:31:44.887454 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131a8634-bb7a-4587-a360-179bf288ac0d" containerName="oc" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.887462 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="131a8634-bb7a-4587-a360-179bf288ac0d" containerName="oc" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.887771 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ccee6b2-3cb1-4726-ad24-230ff0186f5a" containerName="collect-profiles" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.887824 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a8634-bb7a-4587-a360-179bf288ac0d" containerName="oc" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.889998 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.910272 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.941229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.941308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:44 crc kubenswrapper[4809]: I0226 15:31:44.941333 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlq7b\" (UniqueName: \"kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.046705 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.052482 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.052568 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlq7b\" (UniqueName: \"kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.052725 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.053490 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.088192 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zlq7b\" (UniqueName: \"kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b\") pod \"redhat-operators-ml9pd\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.212246 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:31:45 crc kubenswrapper[4809]: I0226 15:31:45.757601 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:31:46 crc kubenswrapper[4809]: I0226 15:31:46.782708 4809 generic.go:334] "Generic (PLEG): container finished" podID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerID="a94c226ca74236ceaa6727a4fe95eff3d8f47b739696c985213958e7929ef6b5" exitCode=0 Feb 26 15:31:46 crc kubenswrapper[4809]: I0226 15:31:46.782774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerDied","Data":"a94c226ca74236ceaa6727a4fe95eff3d8f47b739696c985213958e7929ef6b5"} Feb 26 15:31:46 crc kubenswrapper[4809]: I0226 15:31:46.782982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerStarted","Data":"a491cfc9a32367da424eb155f4623cb3226fe9a3e0bd79c3d015dbb922564a27"} Feb 26 15:31:46 crc kubenswrapper[4809]: I0226 15:31:46.785286 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:31:49 crc kubenswrapper[4809]: I0226 15:31:49.827356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerStarted","Data":"9702adebad000aa6702b47f07e3229c5b51cc073ac1404df95f7707c74b1cafd"} Feb 26 15:31:51 crc kubenswrapper[4809]: I0226 15:31:51.258415 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:31:51 crc kubenswrapper[4809]: E0226 15:31:51.259299 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.155414 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535332-zfzfl"] Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.159661 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.164537 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.164561 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.164585 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.168125 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535332-zfzfl"] Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.333435 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t8d\" (UniqueName: \"kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d\") pod \"auto-csr-approver-29535332-zfzfl\" (UID: \"7a2832e0-f129-4f72-bd39-a93b1954818c\") " pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.436221 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2t8d\" (UniqueName: \"kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d\") pod \"auto-csr-approver-29535332-zfzfl\" (UID: \"7a2832e0-f129-4f72-bd39-a93b1954818c\") " pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.471775 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2t8d\" (UniqueName: \"kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d\") pod \"auto-csr-approver-29535332-zfzfl\" (UID: \"7a2832e0-f129-4f72-bd39-a93b1954818c\") " pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:00 crc kubenswrapper[4809]: I0226 15:32:00.489379 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:01 crc kubenswrapper[4809]: I0226 15:32:01.520331 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535332-zfzfl"] Feb 26 15:32:01 crc kubenswrapper[4809]: W0226 15:32:01.529505 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a2832e0_f129_4f72_bd39_a93b1954818c.slice/crio-c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21 WatchSource:0}: Error finding container c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21: Status 404 returned error can't find the container with id c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21 Feb 26 15:32:01 crc kubenswrapper[4809]: I0226 15:32:01.969693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" event={"ID":"7a2832e0-f129-4f72-bd39-a93b1954818c","Type":"ContainerStarted","Data":"c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21"} Feb 26 15:32:02 crc kubenswrapper[4809]: I0226 15:32:02.276131 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:32:02 crc kubenswrapper[4809]: E0226 15:32:02.277616 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:05 crc kubenswrapper[4809]: I0226 15:32:05.029493 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" event={"ID":"7a2832e0-f129-4f72-bd39-a93b1954818c","Type":"ContainerStarted","Data":"1d6007aad4abb0f71130c8fe1825c2cb8b89406ddf0918375b57a7c589ed9a1b"} Feb 26 15:32:06 crc kubenswrapper[4809]: I0226 15:32:06.076737 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" podStartSLOduration=3.9770461900000003 podStartE2EDuration="6.076714975s" podCreationTimestamp="2026-02-26 15:32:00 +0000 UTC" firstStartedPulling="2026-02-26 15:32:01.53307336 +0000 UTC m=+4700.006393893" lastFinishedPulling="2026-02-26 15:32:03.632742155 +0000 UTC m=+4702.106062678" observedRunningTime="2026-02-26 15:32:06.066739362 +0000 UTC m=+4704.540059895" watchObservedRunningTime="2026-02-26 15:32:06.076714975 +0000 UTC m=+4704.550035508" Feb 26 15:32:07 crc kubenswrapper[4809]: I0226 15:32:07.062229 4809 generic.go:334] "Generic (PLEG): container finished" podID="7a2832e0-f129-4f72-bd39-a93b1954818c" containerID="1d6007aad4abb0f71130c8fe1825c2cb8b89406ddf0918375b57a7c589ed9a1b" exitCode=0 Feb 26 15:32:07 crc kubenswrapper[4809]: I0226 15:32:07.062306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" event={"ID":"7a2832e0-f129-4f72-bd39-a93b1954818c","Type":"ContainerDied","Data":"1d6007aad4abb0f71130c8fe1825c2cb8b89406ddf0918375b57a7c589ed9a1b"} Feb 26 15:32:08 crc kubenswrapper[4809]: I0226 15:32:08.630278 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:08 crc kubenswrapper[4809]: I0226 15:32:08.755828 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2t8d\" (UniqueName: \"kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d\") pod \"7a2832e0-f129-4f72-bd39-a93b1954818c\" (UID: \"7a2832e0-f129-4f72-bd39-a93b1954818c\") " Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.085313 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" event={"ID":"7a2832e0-f129-4f72-bd39-a93b1954818c","Type":"ContainerDied","Data":"c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21"} Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.085397 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7849767f611cd1a7ee17d5f08df1b8a22b7b57a7ccfa3a9bbf6b0c4e549ef21" Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.085417 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535332-zfzfl" Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.160530 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535326-dqq4l"] Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.174297 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535326-dqq4l"] Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.438708 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d" (OuterVolumeSpecName: "kube-api-access-s2t8d") pod "7a2832e0-f129-4f72-bd39-a93b1954818c" (UID: "7a2832e0-f129-4f72-bd39-a93b1954818c"). InnerVolumeSpecName "kube-api-access-s2t8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:32:09 crc kubenswrapper[4809]: I0226 15:32:09.473871 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2t8d\" (UniqueName: \"kubernetes.io/projected/7a2832e0-f129-4f72-bd39-a93b1954818c-kube-api-access-s2t8d\") on node \"crc\" DevicePath \"\"" Feb 26 15:32:10 crc kubenswrapper[4809]: I0226 15:32:10.272586 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb48696c-ecbb-46d7-90a0-2fadb5c3b15c" path="/var/lib/kubelet/pods/cb48696c-ecbb-46d7-90a0-2fadb5c3b15c/volumes" Feb 26 15:32:13 crc kubenswrapper[4809]: I0226 15:32:13.146431 4809 generic.go:334] "Generic (PLEG): container finished" podID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerID="9702adebad000aa6702b47f07e3229c5b51cc073ac1404df95f7707c74b1cafd" exitCode=0 Feb 26 15:32:13 crc kubenswrapper[4809]: I0226 15:32:13.146606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerDied","Data":"9702adebad000aa6702b47f07e3229c5b51cc073ac1404df95f7707c74b1cafd"} Feb 26 15:32:14 crc kubenswrapper[4809]: I0226 15:32:14.260807 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:32:14 crc kubenswrapper[4809]: E0226 15:32:14.262550 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:15 crc kubenswrapper[4809]: I0226 15:32:15.098678 4809 scope.go:117] "RemoveContainer" containerID="2d8a0aa80ae630416b5bd50d429f1c91b87ea99c8707eaa42592c9dc6916f570" Feb 26 15:32:15 crc kubenswrapper[4809]: I0226 15:32:15.169183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerStarted","Data":"4f383b1ffbb5e98e22ce6e7118efff554ad5255b5e9691509a1d9993ff4bc210"} Feb 26 15:32:15 crc kubenswrapper[4809]: I0226 15:32:15.204751 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ml9pd" podStartSLOduration=3.609393443 podStartE2EDuration="31.204725821s" podCreationTimestamp="2026-02-26 15:31:44 +0000 UTC" firstStartedPulling="2026-02-26 15:31:46.78499258 +0000 UTC m=+4685.258313103" lastFinishedPulling="2026-02-26 15:32:14.380324918 +0000 UTC m=+4712.853645481" observedRunningTime="2026-02-26 15:32:15.18460565 +0000 UTC m=+4713.657926173" watchObservedRunningTime="2026-02-26 15:32:15.204725821 +0000 UTC m=+4713.678046344" Feb 26 15:32:15 crc kubenswrapper[4809]: I0226 15:32:15.214057 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:15 crc kubenswrapper[4809]: I0226 15:32:15.214106 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:16 crc kubenswrapper[4809]: I0226 15:32:16.261157 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ml9pd" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" 
containerName="registry-server" probeResult="failure" output=< Feb 26 15:32:16 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:32:16 crc kubenswrapper[4809]: > Feb 26 15:32:26 crc kubenswrapper[4809]: I0226 15:32:26.257405 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:32:26 crc kubenswrapper[4809]: E0226 15:32:26.258528 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:26 crc kubenswrapper[4809]: I0226 15:32:26.280969 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ml9pd" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server" probeResult="failure" output=< Feb 26 15:32:26 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:32:26 crc kubenswrapper[4809]: > Feb 26 15:32:36 crc kubenswrapper[4809]: I0226 15:32:36.274631 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ml9pd" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server" probeResult="failure" output=< Feb 26 15:32:36 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:32:36 crc kubenswrapper[4809]: > Feb 26 15:32:39 crc kubenswrapper[4809]: I0226 15:32:39.257090 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:32:39 crc kubenswrapper[4809]: E0226 15:32:39.258138 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:46 crc kubenswrapper[4809]: I0226 15:32:46.504217 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ml9pd" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server" probeResult="failure" output=< Feb 26 15:32:46 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:32:46 crc kubenswrapper[4809]: > Feb 26 15:32:52 crc kubenswrapper[4809]: I0226 15:32:52.273677 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:32:52 crc kubenswrapper[4809]: E0226 15:32:52.274533 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:32:55 crc kubenswrapper[4809]: I0226 15:32:55.380323 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:55 crc kubenswrapper[4809]: I0226 15:32:55.435656 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:55 crc kubenswrapper[4809]: I0226 15:32:55.623256 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:32:57 crc kubenswrapper[4809]: I0226 15:32:57.317885 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ml9pd" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server" containerID="cri-o://4f383b1ffbb5e98e22ce6e7118efff554ad5255b5e9691509a1d9993ff4bc210" gracePeriod=2 Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.331718 4809 generic.go:334] "Generic (PLEG): container finished" podID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerID="4f383b1ffbb5e98e22ce6e7118efff554ad5255b5e9691509a1d9993ff4bc210" exitCode=0 Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.332146 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerDied","Data":"4f383b1ffbb5e98e22ce6e7118efff554ad5255b5e9691509a1d9993ff4bc210"} Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.332180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ml9pd" event={"ID":"18713e3b-0238-490b-b0ba-436f2278b0e9","Type":"ContainerDied","Data":"a491cfc9a32367da424eb155f4623cb3226fe9a3e0bd79c3d015dbb922564a27"} Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.332195 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a491cfc9a32367da424eb155f4623cb3226fe9a3e0bd79c3d015dbb922564a27" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.363607 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.418827 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content\") pod \"18713e3b-0238-490b-b0ba-436f2278b0e9\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.418933 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlq7b\" (UniqueName: \"kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b\") pod \"18713e3b-0238-490b-b0ba-436f2278b0e9\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.420571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities\") pod \"18713e3b-0238-490b-b0ba-436f2278b0e9\" (UID: \"18713e3b-0238-490b-b0ba-436f2278b0e9\") " Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.421704 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities" (OuterVolumeSpecName: "utilities") pod "18713e3b-0238-490b-b0ba-436f2278b0e9" (UID: "18713e3b-0238-490b-b0ba-436f2278b0e9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.422737 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.441367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b" (OuterVolumeSpecName: "kube-api-access-zlq7b") pod "18713e3b-0238-490b-b0ba-436f2278b0e9" (UID: "18713e3b-0238-490b-b0ba-436f2278b0e9"). InnerVolumeSpecName "kube-api-access-zlq7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.525251 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlq7b\" (UniqueName: \"kubernetes.io/projected/18713e3b-0238-490b-b0ba-436f2278b0e9-kube-api-access-zlq7b\") on node \"crc\" DevicePath \"\"" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.597609 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18713e3b-0238-490b-b0ba-436f2278b0e9" (UID: "18713e3b-0238-490b-b0ba-436f2278b0e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:32:58 crc kubenswrapper[4809]: I0226 15:32:58.628370 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18713e3b-0238-490b-b0ba-436f2278b0e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:32:59 crc kubenswrapper[4809]: I0226 15:32:59.342811 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ml9pd" Feb 26 15:32:59 crc kubenswrapper[4809]: I0226 15:32:59.392515 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:32:59 crc kubenswrapper[4809]: I0226 15:32:59.405843 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ml9pd"] Feb 26 15:33:00 crc kubenswrapper[4809]: I0226 15:33:00.269776 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" path="/var/lib/kubelet/pods/18713e3b-0238-490b-b0ba-436f2278b0e9/volumes" Feb 26 15:33:03 crc kubenswrapper[4809]: I0226 15:33:03.256691 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:33:03 crc kubenswrapper[4809]: E0226 15:33:03.257433 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:33:16 crc kubenswrapper[4809]: I0226 15:33:16.256700 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:33:16 crc kubenswrapper[4809]: E0226 15:33:16.257704 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:33:28 crc kubenswrapper[4809]: I0226 15:33:28.257729 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:33:28 crc kubenswrapper[4809]: E0226 15:33:28.258779 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:33:40 crc kubenswrapper[4809]: I0226 15:33:40.257211 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:33:40 crc kubenswrapper[4809]: E0226 15:33:40.258049 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:33:54 crc kubenswrapper[4809]: I0226 15:33:54.257424 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:33:54 crc 
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.151572 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535334-pzlf6"]
Feb 26 15:34:00 crc kubenswrapper[4809]: E0226 15:34:00.152819 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.152841 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server"
Feb 26 15:34:00 crc kubenswrapper[4809]: E0226 15:34:00.152863 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="extract-content"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.152871 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="extract-content"
Feb 26 15:34:00 crc kubenswrapper[4809]: E0226 15:34:00.152895 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2832e0-f129-4f72-bd39-a93b1954818c" containerName="oc"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.152904 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2832e0-f129-4f72-bd39-a93b1954818c" containerName="oc"
Feb 26 15:34:00 crc kubenswrapper[4809]: E0226 15:34:00.152917 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="extract-utilities"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.152925 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="extract-utilities"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.153239 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="18713e3b-0238-490b-b0ba-436f2278b0e9" containerName="registry-server"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.153267 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2832e0-f129-4f72-bd39-a93b1954818c" containerName="oc"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.154280 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535334-pzlf6"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.156862 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.158810 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.160981 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.164805 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535334-pzlf6"]
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.250550 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6278\" (UniqueName: \"kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278\") pod \"auto-csr-approver-29535334-pzlf6\" (UID: \"a5e92e68-d95a-41dd-8d0e-aa363ade80eb\") " pod="openshift-infra/auto-csr-approver-29535334-pzlf6"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.355504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6278\" (UniqueName: \"kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278\") pod \"auto-csr-approver-29535334-pzlf6\" (UID: \"a5e92e68-d95a-41dd-8d0e-aa363ade80eb\") " pod="openshift-infra/auto-csr-approver-29535334-pzlf6"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.378092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6278\" (UniqueName: \"kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278\") pod \"auto-csr-approver-29535334-pzlf6\" (UID: \"a5e92e68-d95a-41dd-8d0e-aa363ade80eb\") " pod="openshift-infra/auto-csr-approver-29535334-pzlf6"
Feb 26 15:34:00 crc kubenswrapper[4809]: I0226 15:34:00.480048 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535334-pzlf6"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" Feb 26 15:34:01 crc kubenswrapper[4809]: I0226 15:34:01.061940 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535334-pzlf6"] Feb 26 15:34:01 crc kubenswrapper[4809]: I0226 15:34:01.133771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" event={"ID":"a5e92e68-d95a-41dd-8d0e-aa363ade80eb","Type":"ContainerStarted","Data":"c638cd5ba7cd9159e11ed079cbffcc452b785b879fc897144ec000441a48a3e8"} Feb 26 15:34:03 crc kubenswrapper[4809]: I0226 15:34:03.159240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" event={"ID":"a5e92e68-d95a-41dd-8d0e-aa363ade80eb","Type":"ContainerStarted","Data":"1707e631bbc9c36080eca54db366bf7b3aa605ccc1075aed0429c33b3a812521"} Feb 26 15:34:03 crc kubenswrapper[4809]: I0226 15:34:03.180150 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" podStartSLOduration=2.036251704 podStartE2EDuration="3.180129536s" podCreationTimestamp="2026-02-26 15:34:00 +0000 UTC" firstStartedPulling="2026-02-26 15:34:01.067901414 +0000 UTC m=+4819.541221937" lastFinishedPulling="2026-02-26 15:34:02.211779246 +0000 UTC m=+4820.685099769" observedRunningTime="2026-02-26 15:34:03.173456186 +0000 UTC m=+4821.646776759" watchObservedRunningTime="2026-02-26 15:34:03.180129536 +0000 UTC m=+4821.653450059" Feb 26 15:34:04 crc kubenswrapper[4809]: I0226 15:34:04.170887 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5e92e68-d95a-41dd-8d0e-aa363ade80eb" containerID="1707e631bbc9c36080eca54db366bf7b3aa605ccc1075aed0429c33b3a812521" exitCode=0 Feb 26 15:34:04 crc kubenswrapper[4809]: I0226 15:34:04.171320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" event={"ID":"a5e92e68-d95a-41dd-8d0e-aa363ade80eb","Type":"ContainerDied","Data":"1707e631bbc9c36080eca54db366bf7b3aa605ccc1075aed0429c33b3a812521"} Feb 26 15:34:05 crc kubenswrapper[4809]: I0226 15:34:05.600634 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" Feb 26 15:34:05 crc kubenswrapper[4809]: I0226 15:34:05.719485 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6278\" (UniqueName: \"kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278\") pod \"a5e92e68-d95a-41dd-8d0e-aa363ade80eb\" (UID: \"a5e92e68-d95a-41dd-8d0e-aa363ade80eb\") " Feb 26 15:34:05 crc kubenswrapper[4809]: I0226 15:34:05.726293 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278" (OuterVolumeSpecName: "kube-api-access-h6278") pod "a5e92e68-d95a-41dd-8d0e-aa363ade80eb" (UID: "a5e92e68-d95a-41dd-8d0e-aa363ade80eb"). InnerVolumeSpecName "kube-api-access-h6278". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:34:05 crc kubenswrapper[4809]: I0226 15:34:05.824858 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6278\" (UniqueName: \"kubernetes.io/projected/a5e92e68-d95a-41dd-8d0e-aa363ade80eb-kube-api-access-h6278\") on node \"crc\" DevicePath \"\"" Feb 26 15:34:06 crc kubenswrapper[4809]: I0226 15:34:06.194182 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" event={"ID":"a5e92e68-d95a-41dd-8d0e-aa363ade80eb","Type":"ContainerDied","Data":"c638cd5ba7cd9159e11ed079cbffcc452b785b879fc897144ec000441a48a3e8"} Feb 26 15:34:06 crc kubenswrapper[4809]: I0226 15:34:06.194237 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c638cd5ba7cd9159e11ed079cbffcc452b785b879fc897144ec000441a48a3e8" Feb 26 15:34:06 crc kubenswrapper[4809]: I0226 15:34:06.194242 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535334-pzlf6" Feb 26 15:34:06 crc kubenswrapper[4809]: I0226 15:34:06.688765 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535328-vld5h"] Feb 26 15:34:06 crc kubenswrapper[4809]: I0226 15:34:06.700965 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535328-vld5h"] Feb 26 15:34:08 crc kubenswrapper[4809]: I0226 15:34:08.272323 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7a652d-07bb-41f2-8dcc-968ab77a092b" path="/var/lib/kubelet/pods/cb7a652d-07bb-41f2-8dcc-968ab77a092b/volumes" Feb 26 15:34:09 crc kubenswrapper[4809]: I0226 15:34:09.257734 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:34:09 crc kubenswrapper[4809]: E0226 15:34:09.258389 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:34:15 crc kubenswrapper[4809]: I0226 15:34:15.241840 4809 scope.go:117] "RemoveContainer" containerID="9bae99e11b3f22d0c5d87056b9690a3bf732d2fa9cf7edb7d57f8f0abb17329e" Feb 26 15:34:23 crc kubenswrapper[4809]: I0226 15:34:23.256908 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:34:23 crc kubenswrapper[4809]: E0226 15:34:23.257897 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:34:38 crc kubenswrapper[4809]: I0226 15:34:38.258643 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:34:38 crc kubenswrapper[4809]: E0226 15:34:38.259400 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:34:53 crc kubenswrapper[4809]: I0226 15:34:53.257295 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:34:53 crc kubenswrapper[4809]: E0226 15:34:53.258184 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:35:04 crc kubenswrapper[4809]: I0226 15:35:04.263292 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:35:04 crc kubenswrapper[4809]: E0226 15:35:04.264187 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:35:16 crc kubenswrapper[4809]: I0226 15:35:16.258988 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:35:16 crc kubenswrapper[4809]: E0226 15:35:16.260035 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:35:30 crc kubenswrapper[4809]: I0226 15:35:30.257181 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:35:30 crc kubenswrapper[4809]: E0226 15:35:30.258395 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:35:41 crc kubenswrapper[4809]: I0226 15:35:41.258122 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:35:41 crc kubenswrapper[4809]: E0226 15:35:41.258866 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:35:55 crc kubenswrapper[4809]: I0226 15:35:55.257707 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:35:55 crc kubenswrapper[4809]: E0226 15:35:55.259003 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.170028 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535336-dszbd"] Feb 26 15:36:00 crc kubenswrapper[4809]: E0226 15:36:00.170960 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e92e68-d95a-41dd-8d0e-aa363ade80eb" containerName="oc" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.170978 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e92e68-d95a-41dd-8d0e-aa363ade80eb" containerName="oc" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.171276 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e92e68-d95a-41dd-8d0e-aa363ade80eb" containerName="oc" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.172316 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.174805 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.175001 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.175604 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.182327 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535336-dszbd"] Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.284714 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzsdn\" (UniqueName: \"kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn\") pod \"auto-csr-approver-29535336-dszbd\" (UID: \"eadd52d0-ffe0-4f9b-a2b4-6634866384ca\") " pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.387417 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzsdn\" (UniqueName: \"kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn\") pod \"auto-csr-approver-29535336-dszbd\" (UID: \"eadd52d0-ffe0-4f9b-a2b4-6634866384ca\") " pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.408196 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzsdn\" (UniqueName: 
\"kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn\") pod \"auto-csr-approver-29535336-dszbd\" (UID: \"eadd52d0-ffe0-4f9b-a2b4-6634866384ca\") " pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:00 crc kubenswrapper[4809]: I0226 15:36:00.503323 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:01 crc kubenswrapper[4809]: W0226 15:36:01.047124 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeadd52d0_ffe0_4f9b_a2b4_6634866384ca.slice/crio-0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a WatchSource:0}: Error finding container 0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a: Status 404 returned error can't find the container with id 0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a Feb 26 15:36:01 crc kubenswrapper[4809]: I0226 15:36:01.047882 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535336-dszbd"] Feb 26 15:36:01 crc kubenswrapper[4809]: I0226 15:36:01.638327 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535336-dszbd" event={"ID":"eadd52d0-ffe0-4f9b-a2b4-6634866384ca","Type":"ContainerStarted","Data":"0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a"} Feb 26 15:36:03 crc kubenswrapper[4809]: I0226 15:36:03.851395 4809 generic.go:334] "Generic (PLEG): container finished" podID="eadd52d0-ffe0-4f9b-a2b4-6634866384ca" containerID="091f1dfeb9f8e87f73d982feab73743171e4c6294a02f6ef8abe110d929f5bee" exitCode=0 Feb 26 15:36:03 crc kubenswrapper[4809]: I0226 15:36:03.851453 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535336-dszbd" event={"ID":"eadd52d0-ffe0-4f9b-a2b4-6634866384ca","Type":"ContainerDied","Data":"091f1dfeb9f8e87f73d982feab73743171e4c6294a02f6ef8abe110d929f5bee"} Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.393831 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.493572 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzsdn\" (UniqueName: \"kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn\") pod \"eadd52d0-ffe0-4f9b-a2b4-6634866384ca\" (UID: \"eadd52d0-ffe0-4f9b-a2b4-6634866384ca\") " Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.500727 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn" (OuterVolumeSpecName: "kube-api-access-bzsdn") pod "eadd52d0-ffe0-4f9b-a2b4-6634866384ca" (UID: "eadd52d0-ffe0-4f9b-a2b4-6634866384ca"). InnerVolumeSpecName "kube-api-access-bzsdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.597820 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzsdn\" (UniqueName: \"kubernetes.io/projected/eadd52d0-ffe0-4f9b-a2b4-6634866384ca-kube-api-access-bzsdn\") on node \"crc\" DevicePath \"\"" Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.881925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535336-dszbd" event={"ID":"eadd52d0-ffe0-4f9b-a2b4-6634866384ca","Type":"ContainerDied","Data":"0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a"} Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.881980 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8d703ce9ce04129129ba7c30875ac25b14ed440f23209832ff6f505015c56a" Feb 26 15:36:05 crc kubenswrapper[4809]: I0226 15:36:05.882104 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535336-dszbd" Feb 26 15:36:06 crc kubenswrapper[4809]: I0226 15:36:06.487004 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535330-x2kbx"] Feb 26 15:36:06 crc kubenswrapper[4809]: I0226 15:36:06.502093 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535330-x2kbx"] Feb 26 15:36:08 crc kubenswrapper[4809]: I0226 15:36:08.278308 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131a8634-bb7a-4587-a360-179bf288ac0d" path="/var/lib/kubelet/pods/131a8634-bb7a-4587-a360-179bf288ac0d/volumes" Feb 26 15:36:10 crc kubenswrapper[4809]: I0226 15:36:10.257346 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:36:10 crc kubenswrapper[4809]: E0226 15:36:10.257945 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:36:15 crc kubenswrapper[4809]: I0226 15:36:15.355048 4809 scope.go:117] "RemoveContainer" containerID="7fcba2e7112ebceb64dd66db0b1507a5c27c121a60b9ded870946bf28f38f111" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.842427 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:19 crc kubenswrapper[4809]: E0226 15:36:19.843080 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eadd52d0-ffe0-4f9b-a2b4-6634866384ca" containerName="oc" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.843091 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eadd52d0-ffe0-4f9b-a2b4-6634866384ca" containerName="oc" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.843320 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="eadd52d0-ffe0-4f9b-a2b4-6634866384ca" containerName="oc" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.852195 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.858895 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.965106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.965278 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c662n\" (UniqueName: \"kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:19 crc kubenswrapper[4809]: I0226 15:36:19.965370 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.067572 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c662n\" (UniqueName: \"kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.067740 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.067831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.068361 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.068422 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.442979 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c662n\" (UniqueName: \"kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n\") pod \"redhat-marketplace-w7zl5\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.473415 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:20 crc kubenswrapper[4809]: I0226 15:36:20.968739 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:21 crc kubenswrapper[4809]: I0226 15:36:21.052812 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerStarted","Data":"d3390cd9a2926e5991690b4083bb089946e6a00b95341e0b8cb34edee18deaea"} Feb 26 15:36:22 crc kubenswrapper[4809]: I0226 15:36:22.066082 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerID="4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79" exitCode=0 Feb 26 15:36:22 crc kubenswrapper[4809]: I0226 15:36:22.066241 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerDied","Data":"4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79"} Feb 26 15:36:22 crc kubenswrapper[4809]: I0226 15:36:22.267629 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:36:23 crc kubenswrapper[4809]: I0226 15:36:23.080208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189"} Feb 26 15:36:24 crc kubenswrapper[4809]: I0226 15:36:24.095729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerStarted","Data":"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9"} Feb 26 15:36:25 crc kubenswrapper[4809]: I0226 15:36:25.114498 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerID="4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9" exitCode=0 Feb 26 15:36:25 crc kubenswrapper[4809]: I0226 15:36:25.114609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerDied","Data":"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9"} Feb 26 15:36:26 crc kubenswrapper[4809]: I0226 15:36:26.144084 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerStarted","Data":"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df"} Feb 26 15:36:26 crc kubenswrapper[4809]: I0226 15:36:26.178224 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7zl5" podStartSLOduration=3.734028949 podStartE2EDuration="7.178204841s" 
podCreationTimestamp="2026-02-26 15:36:19 +0000 UTC" firstStartedPulling="2026-02-26 15:36:22.072184397 +0000 UTC m=+4960.545504920" lastFinishedPulling="2026-02-26 15:36:25.516360289 +0000 UTC m=+4963.989680812" observedRunningTime="2026-02-26 15:36:26.162085353 +0000 UTC m=+4964.635405906" watchObservedRunningTime="2026-02-26 15:36:26.178204841 +0000 UTC m=+4964.651525364" Feb 26 15:36:30 crc kubenswrapper[4809]: I0226 15:36:30.474444 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:30 crc kubenswrapper[4809]: I0226 15:36:30.475050 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:30 crc kubenswrapper[4809]: I0226 15:36:30.527954 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:31 crc kubenswrapper[4809]: I0226 15:36:31.995726 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:32 crc kubenswrapper[4809]: I0226 15:36:32.083075 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.220522 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7zl5" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="registry-server" containerID="cri-o://476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df" gracePeriod=2 Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.815894 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.971713 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content\") pod \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.971853 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities\") pod \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.971993 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c662n\" (UniqueName: \"kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n\") pod \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\" (UID: \"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b\") " Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.972829 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities" (OuterVolumeSpecName: "utilities") pod "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" (UID: "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:36:33 crc kubenswrapper[4809]: I0226 15:36:33.977767 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n" (OuterVolumeSpecName: "kube-api-access-c662n") pod "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" (UID: "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b"). InnerVolumeSpecName "kube-api-access-c662n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.070096 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" (UID: "ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.074258 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.074290 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.074301 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c662n\" (UniqueName: \"kubernetes.io/projected/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b-kube-api-access-c662n\") on node \"crc\" DevicePath \"\"" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.236706 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerID="476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df" exitCode=0 Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.236751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerDied","Data":"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df"} Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.236784 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7zl5" event={"ID":"ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b","Type":"ContainerDied","Data":"d3390cd9a2926e5991690b4083bb089946e6a00b95341e0b8cb34edee18deaea"} Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.236804 4809 scope.go:117] "RemoveContainer" containerID="476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.238223 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7zl5" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.258581 4809 scope.go:117] "RemoveContainer" containerID="4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.292916 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.300004 4809 scope.go:117] "RemoveContainer" containerID="4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.306328 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7zl5"] Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.347818 4809 scope.go:117] "RemoveContainer" containerID="476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df" Feb 26 15:36:34 crc kubenswrapper[4809]: E0226 15:36:34.348327 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df\": container with ID starting with 476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df not found: ID does not exist" containerID="476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.348387 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df"} err="failed to get container status \"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df\": rpc error: code = NotFound desc = could not find container \"476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df\": container with ID starting with 476a1582b7c238cde8ef766bcced7f0ccab7b6bf741239e3e7315a14ca1638df not found: ID does not exist" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.348421 4809 scope.go:117] "RemoveContainer" containerID="4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9" Feb 26 15:36:34 crc kubenswrapper[4809]: E0226 15:36:34.349087 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9\": container with ID starting with 4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9 not found: ID does not exist" containerID="4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.349141 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9"} err="failed to get container status \"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9\": rpc error: code = NotFound desc = could not find container \"4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9\": container with ID starting with 4be7285302ba89b99758f7f1f5fc0b13d4f1bdd011c5b32cc65558bb061c8ab9 not found: ID does not exist" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.349177 4809 scope.go:117] "RemoveContainer" containerID="4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79" Feb 26 15:36:34 crc kubenswrapper[4809]: E0226 15:36:34.349485 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79\": container with ID starting with 4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79 not found: ID does not exist" containerID="4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79" Feb 26 15:36:34 crc kubenswrapper[4809]: I0226 15:36:34.349514 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79"} err="failed to get container status \"4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79\": rpc error: code = NotFound desc = could not find container \"4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79\": container with ID starting with 4c560512bbfa89062167241711d98f87f808339e9ef10330f1bfdf9d80966e79 not found: ID does not exist" Feb 26 15:36:36 crc kubenswrapper[4809]: I0226 15:36:36.294293 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" path="/var/lib/kubelet/pods/ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b/volumes" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.174184 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535338-hcpvt"] Feb 26 15:38:00 crc kubenswrapper[4809]: E0226 15:38:00.176667 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="extract-content" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.176790 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="extract-content" Feb 26 15:38:00 crc kubenswrapper[4809]: E0226 15:38:00.176907 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="extract-utilities" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.177002 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="extract-utilities" Feb 26 15:38:00 crc kubenswrapper[4809]: E0226 15:38:00.177212 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="registry-server" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.177329 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="registry-server" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.177894 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab0751b8-e12c-4c85-9f7d-7f8fa24c1e8b" containerName="registry-server" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.179254 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.182848 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.183200 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.183573 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.191998 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535338-hcpvt"] Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.324308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppksd\" (UniqueName: \"kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd\") pod \"auto-csr-approver-29535338-hcpvt\" (UID: \"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2\") " pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.427431 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppksd\" (UniqueName: \"kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd\") pod \"auto-csr-approver-29535338-hcpvt\" (UID: \"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2\") " pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.454373 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppksd\" (UniqueName: \"kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd\") pod \"auto-csr-approver-29535338-hcpvt\" (UID: \"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2\") " pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:00 crc kubenswrapper[4809]: I0226 15:38:00.506905 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:01 crc kubenswrapper[4809]: I0226 15:38:01.046531 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535338-hcpvt"] Feb 26 15:38:01 crc kubenswrapper[4809]: I0226 15:38:01.050007 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:38:01 crc kubenswrapper[4809]: I0226 15:38:01.504629 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" event={"ID":"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2","Type":"ContainerStarted","Data":"4f1bb8292a73bb67f4b3f831ab6aceb638a2d5b3b33611407d8e5924ceb5a4bf"} Feb 26 15:38:02 crc kubenswrapper[4809]: I0226 15:38:02.539461 4809 generic.go:334] "Generic (PLEG): container finished" podID="a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" containerID="e29cdc9862ddab45a93bed5275461c9df103ed6ae8900ebced0f0facda0e47c3" exitCode=0 Feb 26 15:38:02 crc kubenswrapper[4809]: I0226 15:38:02.539543 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" event={"ID":"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2","Type":"ContainerDied","Data":"e29cdc9862ddab45a93bed5275461c9df103ed6ae8900ebced0f0facda0e47c3"} Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.003556 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.126393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppksd\" (UniqueName: \"kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd\") pod \"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2\" (UID: \"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2\") " Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.141323 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd" (OuterVolumeSpecName: "kube-api-access-ppksd") pod "a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" (UID: "a3c1faaf-c2b3-4865-9464-ecfc12cd42c2"). InnerVolumeSpecName "kube-api-access-ppksd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.230027 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppksd\" (UniqueName: \"kubernetes.io/projected/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2-kube-api-access-ppksd\") on node \"crc\" DevicePath \"\"" Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.572002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" event={"ID":"a3c1faaf-c2b3-4865-9464-ecfc12cd42c2","Type":"ContainerDied","Data":"4f1bb8292a73bb67f4b3f831ab6aceb638a2d5b3b33611407d8e5924ceb5a4bf"} Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.572412 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1bb8292a73bb67f4b3f831ab6aceb638a2d5b3b33611407d8e5924ceb5a4bf" Feb 26 15:38:04 crc kubenswrapper[4809]: I0226 15:38:04.572492 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535338-hcpvt" Feb 26 15:38:05 crc kubenswrapper[4809]: I0226 15:38:05.099527 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535332-zfzfl"] Feb 26 15:38:05 crc kubenswrapper[4809]: I0226 15:38:05.109995 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535332-zfzfl"] Feb 26 15:38:06 crc kubenswrapper[4809]: I0226 15:38:06.272429 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2832e0-f129-4f72-bd39-a93b1954818c" path="/var/lib/kubelet/pods/7a2832e0-f129-4f72-bd39-a93b1954818c/volumes" Feb 26 15:38:15 crc kubenswrapper[4809]: I0226 15:38:15.516422 4809 scope.go:117] "RemoveContainer" containerID="4f383b1ffbb5e98e22ce6e7118efff554ad5255b5e9691509a1d9993ff4bc210" Feb 26 15:38:15 crc kubenswrapper[4809]: I0226 15:38:15.550731 4809 scope.go:117] "RemoveContainer" containerID="9702adebad000aa6702b47f07e3229c5b51cc073ac1404df95f7707c74b1cafd" Feb 26 15:38:15 crc kubenswrapper[4809]: I0226 15:38:15.581193 4809 scope.go:117] "RemoveContainer" containerID="a94c226ca74236ceaa6727a4fe95eff3d8f47b739696c985213958e7929ef6b5" Feb 26 15:38:15 crc kubenswrapper[4809]: I0226 15:38:15.639410 4809 scope.go:117] "RemoveContainer" containerID="1d6007aad4abb0f71130c8fe1825c2cb8b89406ddf0918375b57a7c589ed9a1b" Feb 26 15:38:41 crc kubenswrapper[4809]: I0226 15:38:41.793858 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:38:41 crc kubenswrapper[4809]: I0226 15:38:41.794549 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.062695 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 15:38:52 crc kubenswrapper[4809]: E0226 15:38:52.063682 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" containerName="oc" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.063695 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" containerName="oc" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.063940 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" containerName="oc" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.064795 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.070108 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.070501 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.070711 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.070910 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7fd68" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.074206 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.208638 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.208756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.208838 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209344 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209643 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209715 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.209939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8ztj\" (UniqueName: \"kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.312614 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.312711 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.312778 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.312920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.312962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.313077 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.313210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.313268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.313354 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8ztj\" (UniqueName: \"kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.313456 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.314110 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.314297 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.315314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.316210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.319600 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.319740 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " 
pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.343370 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.347853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8ztj\" (UniqueName: \"kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.363176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.408485 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 15:38:52 crc kubenswrapper[4809]: I0226 15:38:52.973145 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 15:38:53 crc kubenswrapper[4809]: I0226 15:38:53.178555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5dc4d14b-07db-462f-9fb4-8a00eb3452be","Type":"ContainerStarted","Data":"2413499fb3bec5e2b94d915f0ded4d82274bc01d2ee807489caecb7063e2052f"} Feb 26 15:39:11 crc kubenswrapper[4809]: I0226 15:39:11.794196 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:39:11 crc kubenswrapper[4809]: I0226 15:39:11.794795 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:39:34 crc kubenswrapper[4809]: I0226 15:39:34.560675 4809 trace.go:236] Trace[110713185]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-2jjnr" (26-Feb-2026 15:39:33.155) (total time: 1404ms): Feb 26 15:39:34 crc kubenswrapper[4809]: Trace[110713185]: [1.404076106s] [1.404076106s] END Feb 26 15:39:41 crc kubenswrapper[4809]: I0226 15:39:41.793546 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:39:41 crc kubenswrapper[4809]: I0226 15:39:41.794190 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 26 15:39:41 crc kubenswrapper[4809]: I0226 15:39:41.794251 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:39:41 crc kubenswrapper[4809]: I0226 15:39:41.795278 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:39:41 crc kubenswrapper[4809]: I0226 15:39:41.795326 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189" gracePeriod=600 Feb 26 15:39:42 crc kubenswrapper[4809]: I0226 15:39:42.878576 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189" exitCode=0 Feb 26 15:39:42 crc kubenswrapper[4809]: I0226 15:39:42.878660 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189"} Feb 26 15:39:42 crc kubenswrapper[4809]: I0226 15:39:42.878913 4809 scope.go:117] "RemoveContainer" containerID="3792691a4063a44709afdc49bc6231ae199e8aabd6a46dda9679c912eba4902f" Feb 26 15:39:43 crc kubenswrapper[4809]: E0226 15:39:43.820605 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 26 15:39:43 crc kubenswrapper[4809]: E0226 15:39:43.821849 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8ztj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5dc4d14b-07db-462f-9fb4-8a00eb3452be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 15:39:43 crc kubenswrapper[4809]: E0226 15:39:43.823168 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" Feb 26 15:39:43 crc kubenswrapper[4809]: E0226 15:39:43.894461 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" Feb 26 15:39:44 crc kubenswrapper[4809]: I0226 15:39:44.903042 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567"} Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.167670 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535340-ljs2x"] Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.169859 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.171853 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.172083 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.172882 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.178399 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535340-ljs2x"] Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.209768 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.298189 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg88c\" (UniqueName: \"kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c\") pod \"auto-csr-approver-29535340-ljs2x\" (UID: \"14e213e4-4cd6-4bba-bfe8-50fec48b508c\") " pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.402188 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg88c\" (UniqueName: \"kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c\") pod \"auto-csr-approver-29535340-ljs2x\" (UID: \"14e213e4-4cd6-4bba-bfe8-50fec48b508c\") " pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.425678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg88c\" (UniqueName: \"kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c\") pod \"auto-csr-approver-29535340-ljs2x\" (UID: \"14e213e4-4cd6-4bba-bfe8-50fec48b508c\") " pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:00 crc kubenswrapper[4809]: I0226 15:40:00.496836 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:01 crc kubenswrapper[4809]: I0226 15:40:01.135291 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535340-ljs2x"] Feb 26 15:40:01 crc kubenswrapper[4809]: W0226 15:40:01.150955 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14e213e4_4cd6_4bba_bfe8_50fec48b508c.slice/crio-b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1 WatchSource:0}: Error finding container b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1: Status 404 returned error can't find the container with id b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1 Feb 26 15:40:02 crc kubenswrapper[4809]: I0226 15:40:02.106862 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" event={"ID":"14e213e4-4cd6-4bba-bfe8-50fec48b508c","Type":"ContainerStarted","Data":"b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1"} Feb 26 15:40:04 crc kubenswrapper[4809]: I0226 15:40:04.140771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" event={"ID":"14e213e4-4cd6-4bba-bfe8-50fec48b508c","Type":"ContainerStarted","Data":"939bead0f98b173455e8bdecd73b548251f0d5e767d7c4abe861288dc297f3ea"} Feb 26 15:40:04 crc kubenswrapper[4809]: I0226 15:40:04.145897 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5dc4d14b-07db-462f-9fb4-8a00eb3452be","Type":"ContainerStarted","Data":"7fa5e4884e447a13c4ab82662cf5732bebc5c003e17fece4b5345b59662c6ffc"} Feb 26 15:40:04 crc kubenswrapper[4809]: I0226 15:40:04.157579 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" podStartSLOduration=2.060625531 podStartE2EDuration="4.15755952s" podCreationTimestamp="2026-02-26 15:40:00 +0000 UTC" firstStartedPulling="2026-02-26 15:40:01.153870202 +0000 UTC m=+5179.627190725" lastFinishedPulling="2026-02-26 15:40:03.250804191 +0000 UTC m=+5181.724124714" observedRunningTime="2026-02-26 15:40:04.155913843 +0000 UTC m=+5182.629234366" watchObservedRunningTime="2026-02-26 15:40:04.15755952 +0000 UTC m=+5182.630880043" Feb 26 15:40:04 crc kubenswrapper[4809]: I0226 15:40:04.186366 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.950159609 podStartE2EDuration="1m14.186347038s" podCreationTimestamp="2026-02-26 15:38:50 +0000 UTC" firstStartedPulling="2026-02-26 15:38:52.970725781 +0000 UTC m=+5111.444046324" lastFinishedPulling="2026-02-26 15:40:00.20691323 +0000 UTC m=+5178.680233753" observedRunningTime="2026-02-26 15:40:04.175210832 +0000 UTC m=+5182.648531355" watchObservedRunningTime="2026-02-26 15:40:04.186347038 +0000 UTC m=+5182.659667561" Feb 26 15:40:05 crc kubenswrapper[4809]: I0226 15:40:05.157717 4809 generic.go:334] "Generic (PLEG): container finished" podID="14e213e4-4cd6-4bba-bfe8-50fec48b508c" containerID="939bead0f98b173455e8bdecd73b548251f0d5e767d7c4abe861288dc297f3ea" exitCode=0 Feb 26 15:40:05 crc kubenswrapper[4809]: I0226 15:40:05.157834 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" 
event={"ID":"14e213e4-4cd6-4bba-bfe8-50fec48b508c","Type":"ContainerDied","Data":"939bead0f98b173455e8bdecd73b548251f0d5e767d7c4abe861288dc297f3ea"} Feb 26 15:40:06 crc kubenswrapper[4809]: I0226 15:40:06.569618 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:06 crc kubenswrapper[4809]: I0226 15:40:06.677376 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg88c\" (UniqueName: \"kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c\") pod \"14e213e4-4cd6-4bba-bfe8-50fec48b508c\" (UID: \"14e213e4-4cd6-4bba-bfe8-50fec48b508c\") " Feb 26 15:40:06 crc kubenswrapper[4809]: I0226 15:40:06.682855 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c" (OuterVolumeSpecName: "kube-api-access-wg88c") pod "14e213e4-4cd6-4bba-bfe8-50fec48b508c" (UID: "14e213e4-4cd6-4bba-bfe8-50fec48b508c"). InnerVolumeSpecName "kube-api-access-wg88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:40:06 crc kubenswrapper[4809]: I0226 15:40:06.782343 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg88c\" (UniqueName: \"kubernetes.io/projected/14e213e4-4cd6-4bba-bfe8-50fec48b508c-kube-api-access-wg88c\") on node \"crc\" DevicePath \"\"" Feb 26 15:40:07 crc kubenswrapper[4809]: I0226 15:40:07.187126 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" event={"ID":"14e213e4-4cd6-4bba-bfe8-50fec48b508c","Type":"ContainerDied","Data":"b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1"} Feb 26 15:40:07 crc kubenswrapper[4809]: I0226 15:40:07.187460 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b967549394f563bcfad326b266c40e4e8e0e6264937e1bafee7f951abcba7ff1" Feb 26 15:40:07 crc kubenswrapper[4809]: I0226 15:40:07.187343 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535340-ljs2x" Feb 26 15:40:07 crc kubenswrapper[4809]: I0226 15:40:07.231166 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535334-pzlf6"] Feb 26 15:40:07 crc kubenswrapper[4809]: I0226 15:40:07.246937 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535334-pzlf6"] Feb 26 15:40:08 crc kubenswrapper[4809]: I0226 15:40:08.270398 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e92e68-d95a-41dd-8d0e-aa363ade80eb" path="/var/lib/kubelet/pods/a5e92e68-d95a-41dd-8d0e-aa363ade80eb/volumes" Feb 26 15:40:15 crc kubenswrapper[4809]: I0226 15:40:15.840619 4809 scope.go:117] "RemoveContainer" containerID="1707e631bbc9c36080eca54db366bf7b3aa605ccc1075aed0429c33b3a812521" Feb 26 15:42:01 crc kubenswrapper[4809]: I0226 15:42:01.665108 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:01 crc kubenswrapper[4809]: I0226 15:42:01.665107 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:01 crc kubenswrapper[4809]: I0226 15:42:01.742589 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:01 crc kubenswrapper[4809]: I0226 15:42:01.747590 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.450210 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.716401 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535342-lqss6"] Feb 26 15:42:02 crc kubenswrapper[4809]: E0226 15:42:02.725443 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e213e4-4cd6-4bba-bfe8-50fec48b508c" containerName="oc" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.725510 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e213e4-4cd6-4bba-bfe8-50fec48b508c" containerName="oc" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.727768 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e213e4-4cd6-4bba-bfe8-50fec48b508c" containerName="oc" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.741269 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" 
probeResult="failure" output="command timed out" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.744487 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.754791 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.770788 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.771987 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.777103 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.810225 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qcg\" (UniqueName: \"kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg\") pod \"auto-csr-approver-29535342-lqss6\" (UID: \"01413d5e-56a4-4b5b-9f12-7baad5eb2c02\") " pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:02 crc kubenswrapper[4809]: I0226 15:42:02.913441 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qcg\" (UniqueName: \"kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg\") pod \"auto-csr-approver-29535342-lqss6\" (UID: \"01413d5e-56a4-4b5b-9f12-7baad5eb2c02\") " pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:03 crc kubenswrapper[4809]: I0226 15:42:03.018833 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qcg\" (UniqueName: \"kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg\") pod \"auto-csr-approver-29535342-lqss6\" (UID: \"01413d5e-56a4-4b5b-9f12-7baad5eb2c02\") " pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:03 crc kubenswrapper[4809]: I0226 15:42:03.137061 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:03 crc kubenswrapper[4809]: I0226 15:42:03.611963 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535342-lqss6"] Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.695215 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.695226 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.701696 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.711266 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.850464 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.901325 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.969295 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podUID="957002f1-5ca4-484b-b664-b7b563257915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.990250 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:05 crc 
kubenswrapper[4809]: I0226 15:42:05.990310 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.990375 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:05 crc kubenswrapper[4809]: I0226 15:42:05.990478 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.078700 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.078748 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.079131 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.079261 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.177940 4809 patch_prober.go:28] interesting pod/thanos-querier-598477c4d-v2nsv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.178003 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podUID="b837cadb-b512-4a4a-ae50-0b8729bd351a" 
containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.601533 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.601596 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.830067 4809 patch_prober.go:28] interesting pod/console-bdb486cc4-gfrth container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.830141 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bdb486cc4-gfrth" podUID="0019f68b-c93e-4130-89e7-3e2d7a471e56" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.947387 4809 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-5m958 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:06 crc kubenswrapper[4809]: I0226 15:42:06.947458 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" podUID="ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.060468 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.060541 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.214548 4809 patch_prober.go:28] 
interesting pod/logging-loki-distributor-5d5548c9f5-lllg8 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.214629 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" podUID="6dde47f1-266b-4f13-978b-26ff224139e9" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.390277 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-d8m5w container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.390909 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" podUID="d1f96f50-c096-4107-9fe1-351bb6b20d57" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.501220 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-nv5fd container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.501302 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" podUID="9a7bcc4d-3a79-4727-bf5e-e96d028fa950" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.619264 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.619437 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.800464 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podUID="3e30fc60-012b-4a56-9cf0-56ff13e835d4" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:07 crc kubenswrapper[4809]: I0226 15:42:07.800763 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podUID="3e30fc60-012b-4a56-9cf0-56ff13e835d4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:08 crc kubenswrapper[4809]: I0226 15:42:08.222457 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:08 crc kubenswrapper[4809]: I0226 15:42:08.222524 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:08 crc kubenswrapper[4809]: I0226 15:42:08.273813 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:08 crc kubenswrapper[4809]: I0226 15:42:08.273897 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.222681 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.222747 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.274086 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.274158 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="opa" probeResult="failure" output="Get 
\"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.979424 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.979453 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.979521 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:09 crc kubenswrapper[4809]: I0226 15:42:09.979556 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:10 crc kubenswrapper[4809]: I0226 15:42:10.206164 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:10 crc kubenswrapper[4809]: I0226 15:42:10.206228 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:10 crc kubenswrapper[4809]: I0226 15:42:10.206247 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:10 crc kubenswrapper[4809]: I0226 15:42:10.206303 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:10 crc kubenswrapper[4809]: I0226 
15:42:10.363784 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dcg4s" podUID="0b74b8aa-c615-4cbe-a08f-2781174e2596" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.186253 4809 patch_prober.go:28] interesting pod/thanos-querier-598477c4d-v2nsv container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.85:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.186349 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podUID="b837cadb-b512-4a4a-ae50-0b8729bd351a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.187181 4809 patch_prober.go:28] interesting pod/thanos-querier-598477c4d-v2nsv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.187265 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podUID="b837cadb-b512-4a4a-ae50-0b8729bd351a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.346233 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" podUID="19bdfc76-4c2f-4ef8-890e-84d3a6f5b895" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.660248 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.660418 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.693267 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.693919 4809 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.771688 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.771710 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="a80c453e-f839-4b12-acd5-c0e59ba4b2cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.775354 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.794117 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:42:11 crc kubenswrapper[4809]: I0226 15:42:11.794170 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.375210 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cn6jt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.375319 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cn6jt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.375315 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" podUID="75ed42a0-23bb-4422-bdde-87edffef1c8a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.375398 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cn6jt" podUID="75ed42a0-23bb-4422-bdde-87edffef1c8a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc 
kubenswrapper[4809]: I0226 15:42:12.470287 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" podUID="629e9f19-72e1-497b-a156-51a0ed359d4c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.594207 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" podUID="629e9f19-72e1-497b-a156-51a0ed359d4c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.594250 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.594218 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.676245 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-v7drb" podUID="cd58f297-8233-45a5-8bd4-04621d1e1750" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.676270 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.676462 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-v7drb" podUID="cd58f297-8233-45a5-8bd4-04621d1e1750" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.775175 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.775234 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.775286 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jz4x7" podUID="4ce72366-e1aa-4a1a-ae00-1ff3e592c4df" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 
15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.775199 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:12 crc kubenswrapper[4809]: I0226 15:42:12.775340 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:13 crc kubenswrapper[4809]: I0226 15:42:13.222923 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:13 crc kubenswrapper[4809]: I0226 15:42:13.223280 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:13 crc kubenswrapper[4809]: I0226 15:42:13.273958 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:13 crc kubenswrapper[4809]: I0226 15:42:13.274059 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265200 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-kwnwr" podUID="e778875f-43d9-4ab5-9e0c-e561a3d4bd2f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265466 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-qq6nr container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265524 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265590 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-qq6nr container/operator namespace/openshift-operators: Readiness probe status=failure 
output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265604 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.265644 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-kwnwr" podUID="e778875f-43d9-4ab5-9e0c-e561a3d4bd2f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.306154 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-b2x7w container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.306187 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9tgqx container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.306201 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" podUID="a664d458-7627-417c-ad03-5665fe60d20a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.306244 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podUID="bb918b49-7bc0-40e4-b7a7-a4ab671e7911" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.409444 4809 patch_prober.go:28] interesting pod/metrics-server-57b6f675c4-zbdkg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.409535 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" podUID="4b6ea043-8b1b-45ed-8ac8-422d673444f8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.464709 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="0c763bd9-0040-4c8b-996b-e837d320ab67" 
containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.15:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.819334 4809 patch_prober.go:28] interesting pod/controller-manager-5c78f4f7b8-2rr5p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.819334 4809 patch_prober.go:28] interesting pod/controller-manager-5c78f4f7b8-2rr5p container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.821992 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podUID="54f5bb56-f353-4d8d-8a61-f0925dc4c25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.821992 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podUID="54f5bb56-f353-4d8d-8a61-f0925dc4c25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.901164 4809 patch_prober.go:28] interesting pod/monitoring-plugin-7df6d976f7-8dzjt container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.901461 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" podUID="a2ca0fbe-1b6a-489f-909a-589efde40622" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.975909 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.980849 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.976002 4809 
patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:14 crc kubenswrapper[4809]: I0226 15:42:14.980962 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.019574 4809 patch_prober.go:28] interesting pod/route-controller-manager-5999566584-zlmhw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.019670 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" podUID="81541539-a8ed-415e-aae6-3bb9cb639c08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.019688 4809 patch_prober.go:28] interesting pod/route-controller-manager-5999566584-zlmhw container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.019762 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" podUID="81541539-a8ed-415e-aae6-3bb9cb639c08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.167240 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" podUID="452db9cf-1689-42fa-bd48-15be5d5012e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.249248 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" podUID="a94df460-1916-4302-a528-1850277c2c68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.249287 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" podUID="452db9cf-1689-42fa-bd48-15be5d5012e4" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.331237 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" podUID="b891860e-25ba-48f0-90f1-a9f481e661eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.331497 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-htlkr" podUID="a94df460-1916-4302-a528-1850277c2c68" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.415254 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-psj8j" podUID="b891860e-25ba-48f0-90f1-a9f481e661eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.415262 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podUID="7a66a093-3f9f-49a8-a45b-84aef0465d4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.661272 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" podUID="369ebb20-08ea-4aa4-ba33-8eecc4a208ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.661535 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podUID="7a66a093-3f9f-49a8-a45b-84aef0465d4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.661635 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5b487ff7-ff62-4570-a75c-314514fb7496" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.172:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.661677 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="5b487ff7-ff62-4570-a75c-314514fb7496" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.172:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 
15:42:15.661721 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podUID="0dc90358-78e1-4391-9b04-72fb1a0ffb6e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.702243 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" podUID="e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.761310 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-bmlld" podUID="f06c5375-eeef-461b-9dce-048a10de5770" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.761366 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bmlld" podUID="f06c5375-eeef-461b-9dce-048a10de5770" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.791481 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2mm4b" podUID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:15 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:15 crc kubenswrapper[4809]: > Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.791554 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:15 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:15 crc kubenswrapper[4809]: > Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.791651 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:15 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:15 crc kubenswrapper[4809]: > Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.791763 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-2mm4b" podUID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:15 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:15 crc kubenswrapper[4809]: > Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828299 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podUID="0dc90358-78e1-4391-9b04-72fb1a0ffb6e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828443 
4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" podUID="ed2539dd-3109-42bf-9c5b-aee680db3b4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828499 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828529 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828580 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828609 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.828663 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" podUID="51190e04-2cb1-41e9-9d62-23ef12d0edd3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.910416 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" podUID="f88f4170-586f-4203-8c9b-12aa0865a6be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.910403 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" podUID="51190e04-2cb1-41e9-9d62-23ef12d0edd3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:15 crc kubenswrapper[4809]: I0226 15:42:15.992301 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podUID="d05f3883-4b90-4b5d-94b2-b7e916a66ed6" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.074312 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" podUID="369ebb20-08ea-4aa4-ba33-8eecc4a208ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.074405 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" podUID="00bdb1ef-c56b-4abe-b491-9c24a8f9089d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.239224 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qpw8j container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.239314 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" podUID="49550fbb-c382-4ee2-9f93-fb53816fb1c7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.239242 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qpw8j container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.239458 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpw8j" podUID="49550fbb-c382-4ee2-9f93-fb53816fb1c7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.280375 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" podUID="ed2539dd-3109-42bf-9c5b-aee680db3b4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.280507 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" podUID="e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 
crc kubenswrapper[4809]: I0226 15:42:16.574392 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wbnwh" podUID="f88f4170-586f-4203-8c9b-12aa0865a6be" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738313 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738355 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podUID="d05f3883-4b90-4b5d-94b2-b7e916a66ed6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738407 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738374 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podUID="957002f1-5ca4-484b-b664-b7b563257915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738692 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738691 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738739 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738806 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" podUID="15986ded-5e26-4bcc-bf72-ee349431961a" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738859 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" podUID="15986ded-5e26-4bcc-bf72-ee349431961a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738886 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738909 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738902 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" podUID="5ba0d806-2bcd-45f1-b529-36ed243d775b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738964 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podUID="957002f1-5ca4-484b-b664-b7b563257915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738977 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738998 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" podUID="bd634336-09f5-4412-a619-3c59838d89c6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739078 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739114 4809 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739149 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" podUID="6f049af1-526c-496e-a9af-4066b69ed359" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739161 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" podUID="6f049af1-526c-496e-a9af-4066b69ed359" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738904 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" podUID="5ba0d806-2bcd-45f1-b529-36ed243d775b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738859 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-t96dj container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739208 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" podUID="a71f5fc0-296c-47c7-ae8b-63cddaa00c27" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738919 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" podUID="00bdb1ef-c56b-4abe-b491-9c24a8f9089d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738952 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739424 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738852 4809 patch_prober.go:28] interesting pod/thanos-querier-598477c4d-v2nsv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.738999 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739534 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739488 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podUID="b837cadb-b512-4a4a-ae50-0b8729bd351a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739034 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739610 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739043 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739061 4809 patch_prober.go:28] interesting pod/router-default-5444994796-dwhvv container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739697 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-dwhvv" 
podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739650 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-dwhvv" podUID="420d9fa3-a7e7-4ddf-8f30-70a56496e0e1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739134 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-t96dj container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.739780 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" podUID="a71f5fc0-296c-47c7-ae8b-63cddaa00c27" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.829822 4809 patch_prober.go:28] interesting pod/console-bdb486cc4-gfrth container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.830180 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bdb486cc4-gfrth" podUID="0019f68b-c93e-4130-89e7-3e2d7a471e56" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.948404 4809 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-5m958 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:16 crc kubenswrapper[4809]: I0226 15:42:16.948511 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" podUID="ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.019296 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.215213 4809 
patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-lllg8 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.215357 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" podUID="6dde47f1-266b-4f13-978b-26ff224139e9" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.411587 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-d8m5w container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.411667 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" podUID="d1f96f50-c096-4107-9fe1-351bb6b20d57" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.500617 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-nv5fd container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.500699 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" podUID="9a7bcc4d-3a79-4727-bf5e-e96d028fa950" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.577203 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.577281 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.577287 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" 
podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.577324 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.577378 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.741000 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.741106 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jz4x7" podUID="4ce72366-e1aa-4a1a-ae00-1ff3e592c4df" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.741919 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.745174 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="a80c453e-f839-4b12-acd5-c0e59ba4b2cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 26 15:42:17 crc kubenswrapper[4809]: I0226 15:42:17.758507 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podUID="3e30fc60-012b-4a56-9cf0-56ff13e835d4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.222431 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.222820 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 
crc kubenswrapper[4809]: I0226 15:42:18.222591 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.222976 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.273719 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.273743 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.273780 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.273792 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.411587 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.411692 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="05cda7c6-2dff-46e8-9622-6dda35865e97" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.480303 4809 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc 
kubenswrapper[4809]: I0226 15:42:18.480370 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="19265028-6636-400d-9803-4b7cbcf14758" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.534414 4809 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:18 crc kubenswrapper[4809]: I0226 15:42:18.534469 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="7d913002-7509-40a2-9de5-3efb1c774a56" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:19 crc kubenswrapper[4809]: I0226 15:42:19.974374 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:19 crc kubenswrapper[4809]: I0226 15:42:19.974455 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:19 crc kubenswrapper[4809]: I0226 15:42:19.974474 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:19 crc kubenswrapper[4809]: I0226 15:42:19.974558 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.076886 4809 trace.go:236] Trace[1183668069]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (26-Feb-2026 15:42:12.798) (total time: 7274ms): Feb 26 15:42:20 crc kubenswrapper[4809]: Trace[1183668069]: [7.27440186s] [7.27440186s] END Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.205905 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.206032 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.206045 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.206135 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.380581 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5b487ff7-ff62-4570-a75c-314514fb7496" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.172:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.380896 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="5b487ff7-ff62-4570-a75c-314514fb7496" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.172:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.644322 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.644382 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.644393 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:20 crc kubenswrapper[4809]: I0226 15:42:20.644473 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.172101 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.172187 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.178566 4809 patch_prober.go:28] interesting pod/thanos-querier-598477c4d-v2nsv container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.178684 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-598477c4d-v2nsv" podUID="b837cadb-b512-4a4a-ae50-0b8729bd351a" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.341325 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" podUID="19bdfc76-4c2f-4ef8-890e-84d3a6f5b895" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.620583 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.623569 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.661348 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.661436 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.693790 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.693856 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.764850 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.764942 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.766177 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.769505 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.779534 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"52aaad8286344eedcba7651c772467a86d9bd7f111e60fee7a9044772efef31a"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.845177 4809 patch_prober.go:28] interesting pod/loki-operator-controller-manager-57cd74799f-hkpdq container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.845177 4809 patch_prober.go:28] interesting pod/loki-operator-controller-manager-57cd74799f-hkpdq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.845255 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" podUID="5be7c3b0-feda-4dfd-963c-17813fdc8651" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.49:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.845278 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" podUID="5be7c3b0-feda-4dfd-963c-17813fdc8651" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.49:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.964209 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" podUID="bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:21 crc kubenswrapper[4809]: I0226 15:42:21.964211 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-fd648b64f-xrqvp" podUID="bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.550195 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" podUID="629e9f19-72e1-497b-a156-51a0ed359d4c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.552796 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.553498 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpd62" podUID="a0457c9d-5a38-464b-92ca-da334aae1915" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.550399 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-pb8s6" podUID="629e9f19-72e1-497b-a156-51a0ed359d4c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.638874 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-v7drb" podUID="cd58f297-8233-45a5-8bd4-04621d1e1750" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.639711 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-v7drb" podUID="cd58f297-8233-45a5-8bd4-04621d1e1750" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.659156 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" 
containerStatusID={"Type":"cri-o","ID":"ebe69d1c0838196e90862c2730a0bfe432eacb5086a2b0fe333b9be9c880d7c5"} pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.659226 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" containerID="cri-o://ebe69d1c0838196e90862c2730a0bfe432eacb5086a2b0fe333b9be9c880d7c5" gracePeriod=2 Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.680295 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.740799 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.740947 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.743811 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.743840 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.743889 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.743907 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.744654 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jz4x7" podUID="4ce72366-e1aa-4a1a-ae00-1ff3e592c4df" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.744749 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.757437 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.757571 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 15:42:22 crc kubenswrapper[4809]: I0226 15:42:22.758624 4809 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"76d790f1db18c921022cf16255166546815966fac22cec2d56c9b314d229ae2d"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.155159 4809 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4vxzc container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.155593 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" podUID="3d875f76-8d31-46f5-9fcc-20d2868e7c2f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.155237 4809 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4vxzc container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.155715 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-4vxzc" podUID="3d875f76-8d31-46f5-9fcc-20d2868e7c2f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.223178 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.223242 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.223271 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-ctm8g container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.223346 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-ctm8g" podUID="b1dab503-8599-4066-85b7-86c389ed7748" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.273911 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/gateway namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.273962 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-568bb59667-znjxl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.273983 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.274000 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-568bb59667-znjxl" podUID="1fc6d9b6-52bd-409c-afa9-693fbe42fb7c" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.354377 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.543499 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jz4x7" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630236 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630727 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630363 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630852 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630868 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 
15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.630970 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.640671 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"2f9e19bbdea4703579a451ff399b2af13b69ddcd0374ca261dfb05f14ac244d2"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.640749 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" containerID="cri-o://2f9e19bbdea4703579a451ff399b2af13b69ddcd0374ca261dfb05f14ac244d2" gracePeriod=30 Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.739595 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.743616 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="a80c453e-f839-4b12-acd5-c0e59ba4b2cc" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.743708 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.759400 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"6f9457216ee1bd106526d009afe3395c0b8603ee1861cacb9352f53c4f9eed5a"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 26 15:42:23 crc kubenswrapper[4809]: I0226 15:42:23.759513 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a80c453e-f839-4b12-acd5-c0e59ba4b2cc" containerName="ceilometer-central-agent" containerID="cri-o://6f9457216ee1bd106526d009afe3395c0b8603ee1861cacb9352f53c4f9eed5a" gracePeriod=30 Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.039686 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-qq6nr container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.039781 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.039891 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-qq6nr container/operator namespace/openshift-operators: Liveness probe status=failure output="Get 
\"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.039919 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-qq6nr" podUID="cc062236-67aa-4219-8e13-45ff2cf44f8e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.21:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.124999 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-kwnwr" podUID="e778875f-43d9-4ab5-9e0c-e561a3d4bd2f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.125180 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-kwnwr" podUID="e778875f-43d9-4ab5-9e0c-e561a3d4bd2f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.197810 4809 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-7p66j container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.197898 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7p66j" podUID="b9d62ac5-d483-4086-be8e-e1b7a784701c" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.248172 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-b2x7w container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.248315 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2x7w" podUID="a664d458-7627-417c-ad03-5665fe60d20a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.274717 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.274791 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" 
probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.316296 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9tgqx container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.316359 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9tgqx container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.316369 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podUID="bb918b49-7bc0-40e4-b7a7-a4ab671e7911" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.316396 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-9tgqx" podUID="bb918b49-7bc0-40e4-b7a7-a4ab671e7911" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.409184 4809 patch_prober.go:28] interesting pod/metrics-server-57b6f675c4-zbdkg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.409311 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" podUID="4b6ea043-8b1b-45ed-8ac8-422d673444f8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.409197 4809 patch_prober.go:28] interesting pod/metrics-server-57b6f675c4-zbdkg container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.409480 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-57b6f675c4-zbdkg" podUID="4b6ea043-8b1b-45ed-8ac8-422d673444f8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.461672 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="0c763bd9-0040-4c8b-996b-e837d320ab67" 
containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.15:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.636276 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.636687 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.691140 4809 generic.go:334] "Generic (PLEG): container finished" podID="5be7c3b0-feda-4dfd-963c-17813fdc8651" containerID="19368411354faeebf3ba3d9b347a1030faa62c8a6b3105e6c5260da7c32a2492" exitCode=1 Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.691214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" event={"ID":"5be7c3b0-feda-4dfd-963c-17813fdc8651","Type":"ContainerDied","Data":"19368411354faeebf3ba3d9b347a1030faa62c8a6b3105e6c5260da7c32a2492"} Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.693231 4809 generic.go:334] "Generic (PLEG): container finished" podID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerID="ebe69d1c0838196e90862c2730a0bfe432eacb5086a2b0fe333b9be9c880d7c5" exitCode=0 Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.693265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" event={"ID":"ba3c9bcd-2859-4815-ba37-d6337eb78ec1","Type":"ContainerDied","Data":"ebe69d1c0838196e90862c2730a0bfe432eacb5086a2b0fe333b9be9c880d7c5"} Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.695987 4809 scope.go:117] "RemoveContainer" containerID="19368411354faeebf3ba3d9b347a1030faa62c8a6b3105e6c5260da7c32a2492" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.786966 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.787052 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.811933 4809 patch_prober.go:28] interesting pod/controller-manager-5c78f4f7b8-2rr5p container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.811982 4809 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podUID="54f5bb56-f353-4d8d-8a61-f0925dc4c25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.812035 4809 patch_prober.go:28] interesting pod/controller-manager-5c78f4f7b8-2rr5p container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.812117 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c78f4f7b8-2rr5p" podUID="54f5bb56-f353-4d8d-8a61-f0925dc4c25d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.78:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.900709 4809 patch_prober.go:28] interesting pod/monitoring-plugin-7df6d976f7-8dzjt container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:24 crc kubenswrapper[4809]: I0226 15:42:24.900793 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7df6d976f7-8dzjt" podUID="a2ca0fbe-1b6a-489f-909a-589efde40622" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.041408 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.041472 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.041777 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-jlgsb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.041839 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jlgsb" podUID="02f12e35-0b9a-4af4-ac63-2602bebcb9b0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc 
kubenswrapper[4809]: I0226 15:42:25.041922 4809 patch_prober.go:28] interesting pod/route-controller-manager-5999566584-zlmhw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.041940 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" podUID="81541539-a8ed-415e-aae6-3bb9cb639c08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.042076 4809 patch_prober.go:28] interesting pod/route-controller-manager-5999566584-zlmhw container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.042149 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5999566584-zlmhw" podUID="81541539-a8ed-415e-aae6-3bb9cb639c08" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.127259 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-r946b" podUID="452db9cf-1689-42fa-bd48-15be5d5012e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.364364 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-vb9br" podUID="7a66a093-3f9f-49a8-a45b-84aef0465d4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.364447 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7ptff" podUID="0dc90358-78e1-4391-9b04-72fb1a0ffb6e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.446287 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c9sm7" podUID="369ebb20-08ea-4aa4-ba33-8eecc4a208ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.446836 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-wmn76" podUID="51190e04-2cb1-41e9-9d62-23ef12d0edd3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.488340 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qnxhv" podUID="e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.535299 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.535392 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.535384 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tlnl6" podUID="ed2539dd-3109-42bf-9c5b-aee680db3b4f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.535430 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-pdzjj container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.535529 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-pdzjj" podUID="cfdf8e15-0bb8-4200-8b1b-517382e568a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.546751 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.546834 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.565638 4809 trace.go:236] Trace[1800872357]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (26-Feb-2026 15:42:19.864) (total time: 5682ms): Feb 26 15:42:25 crc kubenswrapper[4809]: Trace[1800872357]: [5.682424086s] [5.682424086s] END Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.565645 4809 trace.go:236] Trace[2108681378]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (26-Feb-2026 15:42:21.863) (total time: 3683ms): Feb 26 15:42:25 crc kubenswrapper[4809]: Trace[2108681378]: [3.683787568s] [3.683787568s] END Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.624347 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-llxf9" podUID="d05f3883-4b90-4b5d-94b2-b7e916a66ed6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.665440 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" podUID="00bdb1ef-c56b-4abe-b491-9c24a8f9089d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.754237 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-bmlld" podUID="f06c5375-eeef-461b-9dce-048a10de5770" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.763654 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-bmlld" podUID="f06c5375-eeef-461b-9dce-048a10de5770" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.845061 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-t96dj container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.845168 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" podUID="a71f5fc0-296c-47c7-ae8b-63cddaa00c27" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.857669 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.860160 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 15:42:25 crc 
kubenswrapper[4809]: I0226 15:42:25.862716 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.862775 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9fd6731a973ac1ef36d3f9a00dcf2810ddc868e361968a526d4227e6f996fcec" exitCode=1 Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.862810 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9fd6731a973ac1ef36d3f9a00dcf2810ddc868e361968a526d4227e6f996fcec"} Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.862850 4809 scope.go:117] "RemoveContainer" containerID="0665c541a2d67aff5c4baf557d27a9a8082d4f83ea5f74d5fa989f94161de42f" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.864737 4809 scope.go:117] "RemoveContainer" containerID="9fd6731a973ac1ef36d3f9a00dcf2810ddc868e361968a526d4227e6f996fcec" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.887237 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-t96dj container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.887301 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t96dj" podUID="a71f5fc0-296c-47c7-ae8b-63cddaa00c27" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:25 crc kubenswrapper[4809]: I0226 15:42:25.887229 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-bk5r6" podUID="bd634336-09f5-4412-a619-3c59838d89c6" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.012561 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.012704 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.055259 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-25wlm" podUID="6f049af1-526c-496e-a9af-4066b69ed359" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.137470 4809 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-f6n9h" podUID="15986ded-5e26-4bcc-bf72-ee349431961a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.137829 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-dhrj6" podUID="5ba0d806-2bcd-45f1-b529-36ed243d775b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.178690 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podUID="957002f1-5ca4-484b-b664-b7b563257915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.178949 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.178729 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179088 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179115 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179158 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179247 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179265 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179415 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179499 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179562 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179584 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179618 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" podUID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179681 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179800 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179934 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.179952 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.182283 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"4059177222f3c679dea14169b798e0eb9c78b5c16851087fd93402118418704b"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" containerMessage="Container packageserver failed liveness probe, will be restarted" 
Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.182361 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" containerID="cri-o://4059177222f3c679dea14169b798e0eb9c78b5c16851087fd93402118418704b" gracePeriod=30 Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.183280 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"ebc26281c0d9477b6e6de99bba0bf23da1fad5f7e71ddebc5b5b43b1794d2556"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.183344 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" containerID="cri-o://ebc26281c0d9477b6e6de99bba0bf23da1fad5f7e71ddebc5b5b43b1794d2556" gracePeriod=30 Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.346470 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.396588 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.601700 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.601780 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.601887 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.739610 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.830549 4809 patch_prober.go:28] interesting pod/console-bdb486cc4-gfrth container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.830620 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bdb486cc4-gfrth" podUID="0019f68b-c93e-4130-89e7-3e2d7a471e56" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.830715 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.880120 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.883986 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.947797 4809 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-5m958 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.947869 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" podUID="ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:26 crc kubenswrapper[4809]: I0226 15:42:26.947956 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.102236 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.102470 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" podUID="2b26231b-2e6e-4484-8014-6dcf40d06f40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.102558 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.102646 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.187953 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:42:27 crc 
kubenswrapper[4809]: I0226 15:42:27.201883 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.214000 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-lllg8 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.214346 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" podUID="6dde47f1-266b-4f13-978b-26ff224139e9" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.214519 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.220189 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.220245 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" podUID="957002f1-5ca4-484b-b664-b7b563257915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.220251 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.220316 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.220392 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.360432 4809 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-7p66j container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="" start-of-body= Feb 26 15:42:27 
crc kubenswrapper[4809]: I0226 15:42:27.390527 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-d8m5w container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.390592 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" podUID="d1f96f50-c096-4107-9fe1-351bb6b20d57" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.390677 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.471282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.471492 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcjxz\" (UniqueName: \"kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.472586 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.501433 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-nv5fd container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.501498 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" podUID="9a7bcc4d-3a79-4727-bf5e-e96d028fa950" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.501587 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.575272 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcjxz\" (UniqueName: \"kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz\") pod \"redhat-operators-wj6bq\" (UID: 
\"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.575718 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.575995 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.581297 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.582512 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.617336 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.618128 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.618169 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.618209 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.618232 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc 
kubenswrapper[4809]: I0226 15:42:27.743943 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jz4x7" podUID="4ce72366-e1aa-4a1a-ae00-1ff3e592c4df" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.744121 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.744434 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-rvqmb" podUID="5863bb93-7ab4-4326-b1fa-e4f1d5d920e2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.802088 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podUID="3e30fc60-012b-4a56-9cf0-56ff13e835d4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.802118 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" podUID="3e30fc60-012b-4a56-9cf0-56ff13e835d4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.802381 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.831388 4809 patch_prober.go:28] interesting pod/console-bdb486cc4-gfrth container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.831468 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bdb486cc4-gfrth" podUID="0019f68b-c93e-4130-89e7-3e2d7a471e56" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.887289 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-lllg8" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.888627 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-d8m5w" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.898215 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" event={"ID":"5be7c3b0-feda-4dfd-963c-17813fdc8651","Type":"ContainerStarted","Data":"ff28478c253fb97af9679dd31d6ba5bc3cd2cdcca766a6679997fbf878327538"} Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.898711 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.901141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" event={"ID":"ba3c9bcd-2859-4815-ba37-d6337eb78ec1","Type":"ContainerStarted","Data":"809325506e102e00f3c17d2f782c64513a0e54449f8239bbca0a8b575e6bb870"} Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.902570 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" podUID="ba3c9bcd-2859-4815-ba37-d6337eb78ec1" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": dial tcp 10.217.0.96:7472: connect: connection refused" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.948767 4809 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-5m958 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.948829 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" podUID="ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:27 crc kubenswrapper[4809]: I0226 15:42:27.951677 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-nv5fd" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.145243 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" podUID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.288623 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcjxz\" (UniqueName: \"kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz\") pod \"redhat-operators-wj6bq\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.467024 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.543652 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fc9897686-rt5g8" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.546773 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.546837 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.660195 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" podUID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.746394 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-2mm4b" podUID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.755273 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2mm4b" podUID="45178ad4-29b4-4221-ab5f-8d2c6a9a92d2" containerName="registry-server" probeResult="failure" output="command timed out" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.854901 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-2jjnr" podUID="e3b1e666-52f7-42ab-bf72-d47a823ab2fd" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:28 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:28 crc kubenswrapper[4809]: > Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.916783 4809 generic.go:334] "Generic (PLEG): container finished" podID="db50276a-5e85-4edb-9538-0b42201fbe74" containerID="033148a772fde4d4298a372dfca4d125ead7ea075139e271c4e8eed6c166a466" exitCode=1 Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.916868 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" event={"ID":"db50276a-5e85-4edb-9538-0b42201fbe74","Type":"ContainerDied","Data":"033148a772fde4d4298a372dfca4d125ead7ea075139e271c4e8eed6c166a466"} Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.918544 4809 scope.go:117] "RemoveContainer" containerID="033148a772fde4d4298a372dfca4d125ead7ea075139e271c4e8eed6c166a466" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.922125 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.924448 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.924526 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"854ce24c91733d533bed26c34490087aa405f13b2800517bf6ce5ae62b6975b3"} Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.928090 4809 generic.go:334] "Generic (PLEG): container finished" podID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerID="ebc26281c0d9477b6e6de99bba0bf23da1fad5f7e71ddebc5b5b43b1794d2556" exitCode=0 Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.928180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" event={"ID":"fbade11b-78dc-4961-8b28-3d1493bab84c","Type":"ContainerDied","Data":"ebc26281c0d9477b6e6de99bba0bf23da1fad5f7e71ddebc5b5b43b1794d2556"} Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.944326 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc4c2ace-f831-4413-b703-522b24da3a71" containerID="2f9e19bbdea4703579a451ff399b2af13b69ddcd0374ca261dfb05f14ac244d2" exitCode=0 Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.944379 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerDied","Data":"2f9e19bbdea4703579a451ff399b2af13b69ddcd0374ca261dfb05f14ac244d2"} Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.944642 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.955629 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-njhx7" podUID="f52e8302-5dc1-4b5d-b571-29bd5e69f6a6" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:28 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:28 crc kubenswrapper[4809]: > Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.955894 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-njhx7" podUID="f52e8302-5dc1-4b5d-b571-29bd5e69f6a6" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:28 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:28 crc kubenswrapper[4809]: > Feb 26 15:42:28 crc kubenswrapper[4809]: I0226 15:42:28.967585 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-2jjnr" podUID="e3b1e666-52f7-42ab-bf72-d47a823ab2fd" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:28 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:28 crc kubenswrapper[4809]: > Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.576450 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.580585 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.626082 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65dm\" (UniqueName: \"kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.626188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.626612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.728937 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65dm\" (UniqueName: \"kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.729075 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.729192 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.740937 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="d80d3c46-edff-47e9-98e5-357fbc27f114" containerName="prometheus" probeResult="failure" output="command timed out" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.758468 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.762525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.764136 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65dm\" (UniqueName: \"kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm\") pod \"certified-operators-qxdxv\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.965228 4809 generic.go:334] "Generic (PLEG): container finished" podID="19bdfc76-4c2f-4ef8-890e-84d3a6f5b895" containerID="6b6caff2b63580a28efefec471a1d8270f54d2cc88be2785f30ae408caa78df5" exitCode=1 Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.965622 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" event={"ID":"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895","Type":"ContainerDied","Data":"6b6caff2b63580a28efefec471a1d8270f54d2cc88be2785f30ae408caa78df5"} Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.967078 4809 scope.go:117] "RemoveContainer" containerID="6b6caff2b63580a28efefec471a1d8270f54d2cc88be2785f30ae408caa78df5" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.974860 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.974928 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.976649 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.978530 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"5cf1ade45d3ceee6be81edbd0bd8147ab812e281d37d20c963dc66307d8c5067"} pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.978782 4809 patch_prober.go:28] interesting pod/oauth-openshift-55bd48d569-j44gp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.978814 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.66:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.978883 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.983811 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" event={"ID":"fbade11b-78dc-4961-8b28-3d1493bab84c","Type":"ContainerStarted","Data":"cd554ad1329cee252584ed1778daa4fb6e755e9caa3410be351f1861297976c9"} Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.985454 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.985553 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 15:42:29 crc kubenswrapper[4809]: I0226 15:42:29.985586 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.004387 4809 generic.go:334] "Generic (PLEG): container finished" podID="ed3d7dc0-026c-4ed5-b816-b0249300c743" containerID="0bec3112cffeb93ad58c6d529d8b57ca2c39acea124c75ebe7328f0754b78bc5" exitCode=1 Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.004493 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" event={"ID":"ed3d7dc0-026c-4ed5-b816-b0249300c743","Type":"ContainerDied","Data":"0bec3112cffeb93ad58c6d529d8b57ca2c39acea124c75ebe7328f0754b78bc5"} Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.005273 4809 scope.go:117] "RemoveContainer" containerID="0bec3112cffeb93ad58c6d529d8b57ca2c39acea124c75ebe7328f0754b78bc5" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.022330 4809 generic.go:334] "Generic (PLEG): container finished" podID="2c068d1c-3f6c-49a3-bf65-d29b68c5ad11" containerID="4d9e98f8d2c15ec59864808b19333469bf19a162a7c747aba69960892ad7748a" exitCode=1 Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.022436 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" event={"ID":"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11","Type":"ContainerDied","Data":"4d9e98f8d2c15ec59864808b19333469bf19a162a7c747aba69960892ad7748a"} Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.023358 4809 scope.go:117] "RemoveContainer" containerID="4d9e98f8d2c15ec59864808b19333469bf19a162a7c747aba69960892ad7748a" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.043891 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.049754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" event={"ID":"db50276a-5e85-4edb-9538-0b42201fbe74","Type":"ContainerStarted","Data":"843dd30030ac45d6065d3c6466195b3e524c4019a4bded32ce3bd22dbd2767a5"} Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.050429 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.064866 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.206721 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.206782 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.206865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.206734 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.207730 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.207858 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.208235 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"13b5bdbf02968f1e31fdec285a37075b5f0919f38ee608801c2cb4be8b4cdb90"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will 
be restarted" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.208279 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" containerID="cri-o://13b5bdbf02968f1e31fdec285a37075b5f0919f38ee608801c2cb4be8b4cdb90" gracePeriod=30 Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.300529 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.736439 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 15:42:30 crc kubenswrapper[4809]: I0226 15:42:30.974073 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.028974 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.070354 4809 generic.go:334] "Generic (PLEG): container finished" podID="2130e114-53fd-4853-bd3a-df26c1c3df4a" containerID="641ffe48b7935d9029d25da4fe245381b29be4fef1c12a6e546e4712cb96de9d" exitCode=1 Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.070445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" event={"ID":"2130e114-53fd-4853-bd3a-df26c1c3df4a","Type":"ContainerDied","Data":"641ffe48b7935d9029d25da4fe245381b29be4fef1c12a6e546e4712cb96de9d"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.070874 4809 scope.go:117] "RemoveContainer" containerID="641ffe48b7935d9029d25da4fe245381b29be4fef1c12a6e546e4712cb96de9d" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.077560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" event={"ID":"bc4c2ace-f831-4413-b703-522b24da3a71","Type":"ContainerStarted","Data":"f2e62434f619c78e05a79a8871b28e7273c3f45907bac3b1506e8d7ee513bcb4"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.078584 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.080392 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" event={"ID":"2c068d1c-3f6c-49a3-bf65-d29b68c5ad11","Type":"ContainerStarted","Data":"81e1c80cadb0ef6ded7584f2df1dbb95d1a221bf3b2ed0ad0df576a9628e8deb"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.081677 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.089107 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" event={"ID":"19bdfc76-4c2f-4ef8-890e-84d3a6f5b895","Type":"ContainerStarted","Data":"befed55cbc024f97a5f29bc574d6173a3bb3fd75bdca838b053d9d9e161f70c4"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.089148 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.091317 4809 generic.go:334] "Generic (PLEG): container finished" podID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerID="4059177222f3c679dea14169b798e0eb9c78b5c16851087fd93402118418704b" exitCode=0 Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.091377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" event={"ID":"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4","Type":"ContainerDied","Data":"4059177222f3c679dea14169b798e0eb9c78b5c16851087fd93402118418704b"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.091401 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" event={"ID":"1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4","Type":"ContainerStarted","Data":"86bb756a98df2c8ae9d4f53f7bb69a108d0774f0c643eda680e66716cf249da4"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.092721 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.092799 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.092829 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.109340 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" event={"ID":"ed3d7dc0-026c-4ed5-b816-b0249300c743","Type":"ContainerStarted","Data":"22fba9072adbc09f5dda2c95c7a77a9c0a6fc4cf0b0d29bd96fa02122bf24b04"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.110461 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.118536 4809 generic.go:334] "Generic (PLEG): container finished" podID="1aebc8ba-eb1d-49a1-843b-3634bbbd4556" containerID="ed24fbebfca2fb1ced0f78f98883390211fe10348127689b678c24e05c7395a8" exitCode=1 Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.118596 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" event={"ID":"1aebc8ba-eb1d-49a1-843b-3634bbbd4556","Type":"ContainerDied","Data":"ed24fbebfca2fb1ced0f78f98883390211fe10348127689b678c24e05c7395a8"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.119414 4809 scope.go:117] "RemoveContainer" containerID="ed24fbebfca2fb1ced0f78f98883390211fe10348127689b678c24e05c7395a8" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.123681 4809 generic.go:334] "Generic (PLEG): container finished" podID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" 
containerID="13b5bdbf02968f1e31fdec285a37075b5f0919f38ee608801c2cb4be8b4cdb90" exitCode=0 Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.123756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" event={"ID":"9a937049-c4e1-499a-b3eb-6622e14cf7f5","Type":"ContainerDied","Data":"13b5bdbf02968f1e31fdec285a37075b5f0919f38ee608801c2cb4be8b4cdb90"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.126106 4809 generic.go:334] "Generic (PLEG): container finished" podID="00bdb1ef-c56b-4abe-b491-9c24a8f9089d" containerID="ccdc94939c027b111d257e9982b36a511200daaafe942327e02648a255181f8f" exitCode=1 Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.127120 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" event={"ID":"00bdb1ef-c56b-4abe-b491-9c24a8f9089d","Type":"ContainerDied","Data":"ccdc94939c027b111d257e9982b36a511200daaafe942327e02648a255181f8f"} Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.128201 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.128239 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.128701 4809 scope.go:117] "RemoveContainer" containerID="ccdc94939c027b111d257e9982b36a511200daaafe942327e02648a255181f8f" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.180212 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.611292 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.818415 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.818811 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.818891 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 15:42:31 crc kubenswrapper[4809]: I0226 15:42:31.992649 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" 
podUID="b25b5c98-b424-41ce-b099-876b266cf2be" containerName="galera" containerID="cri-o://52aaad8286344eedcba7651c772467a86d9bd7f111e60fee7a9044772efef31a" gracePeriod=20 Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.004482 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" containerID="cri-o://76d790f1db18c921022cf16255166546815966fac22cec2d56c9b314d229ae2d" gracePeriod=21 Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.026299 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.142660 4809 generic.go:334] "Generic (PLEG): container finished" podID="a80c453e-f839-4b12-acd5-c0e59ba4b2cc" containerID="6f9457216ee1bd106526d009afe3395c0b8603ee1861cacb9352f53c4f9eed5a" exitCode=0 Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.142750 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerDied","Data":"6f9457216ee1bd106526d009afe3395c0b8603ee1861cacb9352f53c4f9eed5a"} Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.146390 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" event={"ID":"9a937049-c4e1-499a-b3eb-6622e14cf7f5","Type":"ContainerStarted","Data":"eca923ce678ee92054957b13b3325771d7dafb5f4befa4f3df389b2c7196224f"} Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.146843 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.146919 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.146954 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.150440 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" event={"ID":"00bdb1ef-c56b-4abe-b491-9c24a8f9089d","Type":"ContainerStarted","Data":"61e9f416fc05764c0b380c43b4f545e6e6183bfe9dada3565affb6eb10989f61"} Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.150707 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.153754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" event={"ID":"2130e114-53fd-4853-bd3a-df26c1c3df4a","Type":"ContainerStarted","Data":"f32dfc3dcdf44e2e2a31803912e585f828b82c876c59be9512012757fce4b752"} Feb 26 15:42:32 crc 
kubenswrapper[4809]: I0226 15:42:32.153994 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.156653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7pkzl" event={"ID":"1aebc8ba-eb1d-49a1-843b-3634bbbd4556","Type":"ContainerStarted","Data":"55bc2d57f6004c25934415927b384997c2524956cd95ab60513af12ba2e050c3"} Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.159177 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.159406 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.159183 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 15:42:32 crc kubenswrapper[4809]: I0226 15:42:32.159570 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.203274 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-hdkn4 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.203910 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" podUID="9a937049-c4e1-499a-b3eb-6622e14cf7f5" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.203997 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.204038 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.204036 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.204092 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.801136 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535342-lqss6"] Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.817973 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.832167 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:42:33 crc kubenswrapper[4809]: W0226 15:42:33.851332 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae WatchSource:0}: Error finding container 01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae: Status 404 returned error can't find the container with id 01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae Feb 26 15:42:33 crc kubenswrapper[4809]: W0226 15:42:33.880355 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ea2153_1aae_4065_94f2_138ffbfa4cf8.slice/crio-98b5a4629dca8914e3c7e6b89fbdeb91b9c8467826b351447ab15020bed6e73e WatchSource:0}: Error finding container 98b5a4629dca8914e3c7e6b89fbdeb91b9c8467826b351447ab15020bed6e73e: Status 404 returned error can't find the container with id 98b5a4629dca8914e3c7e6b89fbdeb91b9c8467826b351447ab15020bed6e73e Feb 26 15:42:33 crc kubenswrapper[4809]: W0226 15:42:33.909745 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fb2b260_67e9_4ef5_ad49_b5326ce991b7.slice/crio-7b413e8906c329c9ba9fdd628ed246ce8a602ef451d58f23e24720765e6ae1de WatchSource:0}: Error finding container 7b413e8906c329c9ba9fdd628ed246ce8a602ef451d58f23e24720765e6ae1de: Status 404 returned error can't find the container with id 7b413e8906c329c9ba9fdd628ed246ce8a602ef451d58f23e24720765e6ae1de Feb 26 15:42:33 crc kubenswrapper[4809]: I0226 15:42:33.968138 4809 trace.go:236] Trace[80368094]: "Calculate volume metrics of storage for pod minio-dev/minio" (26-Feb-2026 15:42:32.682) (total time: 1272ms): Feb 26 15:42:33 crc kubenswrapper[4809]: Trace[80368094]: [1.272108197s] [1.272108197s] END Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.085421 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.216066 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a80c453e-f839-4b12-acd5-c0e59ba4b2cc","Type":"ContainerStarted","Data":"2d9ad7abc7788bc704246fd0151a409d73a214fff7578c9f69173e570b417a81"} Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.217970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerStarted","Data":"7b413e8906c329c9ba9fdd628ed246ce8a602ef451d58f23e24720765e6ae1de"} Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.219815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535342-lqss6" event={"ID":"01413d5e-56a4-4b5b-9f12-7baad5eb2c02","Type":"ContainerStarted","Data":"01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae"} Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.222560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerStarted","Data":"98b5a4629dca8914e3c7e6b89fbdeb91b9c8467826b351447ab15020bed6e73e"} Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.546576 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.546840 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.546610 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4kbp5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.546905 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" podUID="bc4c2ace-f831-4413-b703-522b24da3a71" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.790782 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" podUID="db50276a-5e85-4edb-9538-0b42201fbe74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": dial tcp 10.217.0.118:8081: connect: connection refused" Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.808624 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57dc789b66-zjvhb" Feb 26 15:42:34 crc 
kubenswrapper[4809]: I0226 15:42:34.989994 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.989994 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnldf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.990273 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 15:42:34 crc kubenswrapper[4809]: I0226 15:42:34.990329 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" podUID="1e0e52d6-d1c2-420a-bd6a-1fb42ca824a4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.078793 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.078852 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qt4lx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.078872 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.078886 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" podUID="fbade11b-78dc-4961-8b28-3d1493bab84c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.139664 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-b24kw" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.245613 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" 
event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerStarted","Data":"2671cf5ef19ef5504dbeafe162200cd1eb172e08c3b0b0aa713b845b7e7fb85f"} Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.248143 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerStarted","Data":"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7"} Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.604292 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.838548 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bdb486cc4-gfrth" Feb 26 15:42:35 crc kubenswrapper[4809]: I0226 15:42:35.954383 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-5m958" Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.330562 4809 generic.go:334] "Generic (PLEG): container finished" podID="b25b5c98-b424-41ce-b099-876b266cf2be" containerID="52aaad8286344eedcba7651c772467a86d9bd7f111e60fee7a9044772efef31a" exitCode=0 Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.330653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerDied","Data":"52aaad8286344eedcba7651c772467a86d9bd7f111e60fee7a9044772efef31a"} Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.337629 4809 generic.go:334] "Generic (PLEG): container finished" podID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerID="2671cf5ef19ef5504dbeafe162200cd1eb172e08c3b0b0aa713b845b7e7fb85f" exitCode=0 Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.337707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerDied","Data":"2671cf5ef19ef5504dbeafe162200cd1eb172e08c3b0b0aa713b845b7e7fb85f"} Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.357316 4809 generic.go:334] "Generic (PLEG): container finished" podID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerID="5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7" exitCode=0 Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.357373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerDied","Data":"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7"} Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.444461 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerID="76d790f1db18c921022cf16255166546815966fac22cec2d56c9b314d229ae2d" exitCode=0 Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.444802 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerDied","Data":"76d790f1db18c921022cf16255166546815966fac22cec2d56c9b314d229ae2d"} Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.444839 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a4f21dca-3b2f-4818-8356-1de8cfbbc261","Type":"ContainerStarted","Data":"93b4d8e6d82f03897d797a93753ffff478cb85757aaf012f1921f5d8d290a069"} Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.545349 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6" Feb 26 15:42:36 crc kubenswrapper[4809]: I0226 15:42:36.768942 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:42:37 crc kubenswrapper[4809]: I0226 15:42:37.458844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b25b5c98-b424-41ce-b099-876b266cf2be","Type":"ContainerStarted","Data":"1ebfb099d4a78968d9b58ec8dabd459e6909ec5d3414f2840e9ab70e8bcde7fa"} Feb 26 15:42:37 crc kubenswrapper[4809]: I0226 15:42:37.463208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535342-lqss6" event={"ID":"01413d5e-56a4-4b5b-9f12-7baad5eb2c02","Type":"ContainerStarted","Data":"2d1950704410befa5899bbf1b31ae759cca28679bcedc8061b93cfb4baa06572"} Feb 26 15:42:37 crc kubenswrapper[4809]: I0226 15:42:37.526838 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535342-lqss6" podStartSLOduration=35.3726242 podStartE2EDuration="36.526814609s" podCreationTimestamp="2026-02-26 15:42:01 +0000 UTC" firstStartedPulling="2026-02-26 15:42:33.906762131 +0000 UTC m=+5332.380082654" lastFinishedPulling="2026-02-26 15:42:35.06095255 +0000 UTC m=+5333.534273063" observedRunningTime="2026-02-26 15:42:37.518607446 +0000 UTC m=+5335.991927969" watchObservedRunningTime="2026-02-26 15:42:37.526814609 +0000 UTC m=+5336.000135132" Feb 26 15:42:37 crc kubenswrapper[4809]: I0226 15:42:37.551987 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4kbp5" Feb 26 15:42:38 crc kubenswrapper[4809]: I0226 15:42:38.477079 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerStarted","Data":"7272da7e61e70399e54596b82204db2354a7cbcf550e96d04bbe3936dbab3c6e"} Feb 26 15:42:38 crc kubenswrapper[4809]: I0226 15:42:38.479111 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerStarted","Data":"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555"} Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.047001 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.210189 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-hdkn4" Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.493177 4809 generic.go:334] "Generic (PLEG): container finished" podID="01413d5e-56a4-4b5b-9f12-7baad5eb2c02" containerID="2d1950704410befa5899bbf1b31ae759cca28679bcedc8061b93cfb4baa06572" exitCode=0 Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.493270 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535342-lqss6" event={"ID":"01413d5e-56a4-4b5b-9f12-7baad5eb2c02","Type":"ContainerDied","Data":"2d1950704410befa5899bbf1b31ae759cca28679bcedc8061b93cfb4baa06572"} Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.761910 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.762065 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.764548 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"a5e520b7702ac3a5d029935e6a2103723ebb5981eb933900e42afba484122102"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 26 15:42:39 crc kubenswrapper[4809]: I0226 15:42:39.764637 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" containerID="cri-o://a5e520b7702ac3a5d029935e6a2103723ebb5981eb933900e42afba484122102" gracePeriod=30 Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.389995 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.390733 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.530867 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.552239 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.552371 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.581238 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6fc554dcbc-rqcfb" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.671251 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.671426 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqvh\" (UniqueName: \"kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.671449 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.768717 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-57cd74799f-hkpdq" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.774451 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.774617 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqvh\" (UniqueName: \"kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.774649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.776139 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.776234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.813214 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqvh\" (UniqueName: \"kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh\") pod \"community-operators-zk4jc\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:40 crc kubenswrapper[4809]: I0226 15:42:40.896867 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.520286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535342-lqss6" event={"ID":"01413d5e-56a4-4b5b-9f12-7baad5eb2c02","Type":"ContainerDied","Data":"01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae"} Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.526279 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.535427 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.602142 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4qcg\" (UniqueName: \"kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg\") pod \"01413d5e-56a4-4b5b-9f12-7baad5eb2c02\" (UID: \"01413d5e-56a4-4b5b-9f12-7baad5eb2c02\") " Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.653574 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.651899 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg" (OuterVolumeSpecName: "kube-api-access-j4qcg") pod "01413d5e-56a4-4b5b-9f12-7baad5eb2c02" (UID: "01413d5e-56a4-4b5b-9f12-7baad5eb2c02"). InnerVolumeSpecName "kube-api-access-j4qcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.707374 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4qcg\" (UniqueName: \"kubernetes.io/projected/01413d5e-56a4-4b5b-9f12-7baad5eb2c02-kube-api-access-j4qcg\") on node \"crc\" DevicePath \"\"" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.760640 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.760674 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.794756 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.860753 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.860838 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 15:42:41 crc kubenswrapper[4809]: I0226 15:42:41.862723 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 15:42:42 crc kubenswrapper[4809]: I0226 15:42:42.553145 4809 generic.go:334] "Generic (PLEG): container finished" podID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerID="d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555" exitCode=0 Feb 26 15:42:42 crc kubenswrapper[4809]: I0226 15:42:42.553490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerDied","Data":"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555"} Feb 26 15:42:42 crc kubenswrapper[4809]: I0226 15:42:42.586298 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535342-lqss6" Feb 26 15:42:42 crc kubenswrapper[4809]: I0226 15:42:42.588122 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerStarted","Data":"3e01d3237030096700853e18076b33ac61097ecc5047ce1a55fe7302c6be5ffa"} Feb 26 15:42:42 crc kubenswrapper[4809]: I0226 15:42:42.588162 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerStarted","Data":"c25a387635ff03d49c810751866031df32c47760cc2c86fea61fa47f895b0f36"} Feb 26 15:42:43 crc kubenswrapper[4809]: I0226 15:42:43.602101 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerID="3e01d3237030096700853e18076b33ac61097ecc5047ce1a55fe7302c6be5ffa" exitCode=0 Feb 26 15:42:43 crc kubenswrapper[4809]: I0226 15:42:43.602141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerDied","Data":"3e01d3237030096700853e18076b33ac61097ecc5047ce1a55fe7302c6be5ffa"} Feb 26 15:42:44 crc kubenswrapper[4809]: I0226 15:42:44.616257 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jsjcz" Feb 26 15:42:44 crc kubenswrapper[4809]: I0226 15:42:44.861264 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-w2zff" Feb 26 15:42:45 crc kubenswrapper[4809]: I0226 15:42:45.003469 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnldf" Feb 26 15:42:45 crc kubenswrapper[4809]: I0226 15:42:45.009891 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xcrth" Feb 26 15:42:45 crc kubenswrapper[4809]: I0226 15:42:45.082642 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qt4lx" Feb 26 15:42:45 crc kubenswrapper[4809]: I0226 15:42:45.983934 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mvll2" Feb 26 15:42:46 crc kubenswrapper[4809]: I0226 15:42:46.652290 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerStarted","Data":"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd"} Feb 26 15:42:46 crc kubenswrapper[4809]: I0226 15:42:46.656213 4809 generic.go:334] "Generic (PLEG): container finished" podID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerID="a5e520b7702ac3a5d029935e6a2103723ebb5981eb933900e42afba484122102" exitCode=0 Feb 26 15:42:46 crc kubenswrapper[4809]: I0226 15:42:46.656291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d","Type":"ContainerDied","Data":"a5e520b7702ac3a5d029935e6a2103723ebb5981eb933900e42afba484122102"} Feb 26 15:42:46 crc kubenswrapper[4809]: I0226 15:42:46.660075 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerStarted","Data":"70d435a0b13e540be45d98cb4104720cb6779ea24b59935b87e0f57068845ec0"} Feb 26 15:42:46 crc kubenswrapper[4809]: E0226 15:42:46.675118 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache]" Feb 26 15:42:47 crc kubenswrapper[4809]: E0226 15:42:47.515713 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:42:47 crc kubenswrapper[4809]: I0226 15:42:47.718363 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxdxv" podStartSLOduration=11.977684788 podStartE2EDuration="21.718343349s" podCreationTimestamp="2026-02-26 15:42:26 +0000 UTC" firstStartedPulling="2026-02-26 15:42:36.358903822 +0000 UTC m=+5334.832224345" lastFinishedPulling="2026-02-26 15:42:46.099562383 +0000 UTC m=+5344.572882906" observedRunningTime="2026-02-26 15:42:47.713052149 +0000 UTC m=+5346.186372702" watchObservedRunningTime="2026-02-26 15:42:47.718343349 +0000 UTC m=+5346.191663892" Feb 26 15:42:48 crc kubenswrapper[4809]: E0226 15:42:48.111974 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache]" Feb 26 15:42:48 crc kubenswrapper[4809]: E0226 15:42:48.112215 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:42:50 crc kubenswrapper[4809]: I0226 15:42:50.048809 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:42:50 crc kubenswrapper[4809]: I0226 15:42:50.049191 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxdxv" Feb 
26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.196078 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxdxv" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:51 crc kubenswrapper[4809]: > Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.720976 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74917c3f-f22d-43b0-9fbf-6473cb9c6c9d","Type":"ContainerStarted","Data":"41bff5670ddee8ab89e3910bafc1a92b91aa785366d826e519fb5eca6ab07413"} Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.723978 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerID="70d435a0b13e540be45d98cb4104720cb6779ea24b59935b87e0f57068845ec0" exitCode=0 Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.724069 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerDied","Data":"70d435a0b13e540be45d98cb4104720cb6779ea24b59935b87e0f57068845ec0"} Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.726740 4809 generic.go:334] "Generic (PLEG): container finished" podID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerID="7272da7e61e70399e54596b82204db2354a7cbcf550e96d04bbe3936dbab3c6e" exitCode=0 Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.726787 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerDied","Data":"7272da7e61e70399e54596b82204db2354a7cbcf550e96d04bbe3936dbab3c6e"} Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.818778 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.818855 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.818922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.825811 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"854ce24c91733d533bed26c34490087aa405f13b2800517bf6ce5ae62b6975b3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 26 15:42:51 crc kubenswrapper[4809]: I0226 15:42:51.825942 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" containerID="cri-o://854ce24c91733d533bed26c34490087aa405f13b2800517bf6ce5ae62b6975b3" gracePeriod=30 Feb 26 15:42:53 crc kubenswrapper[4809]: I0226 15:42:53.755638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerStarted","Data":"d3af94d72fe0c2dbdf68f6e87546c0db7982307a9294ea949c9a049a8ecd4b7a"} Feb 26 15:42:53 crc kubenswrapper[4809]: I0226 15:42:53.772893 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerStarted","Data":"e24f1d1023d8271324050455b762a4f58e34fffe1e2efa5e9136a19f43823ec0"} Feb 26 15:42:53 crc kubenswrapper[4809]: I0226 15:42:53.797936 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zk4jc" podStartSLOduration=4.182541555 podStartE2EDuration="13.797916727s" podCreationTimestamp="2026-02-26 15:42:40 +0000 UTC" firstStartedPulling="2026-02-26 15:42:43.604591417 +0000 UTC m=+5342.077911950" lastFinishedPulling="2026-02-26 15:42:53.219966599 +0000 UTC m=+5351.693287122" observedRunningTime="2026-02-26 15:42:53.796784385 +0000 UTC m=+5352.270104928" watchObservedRunningTime="2026-02-26 15:42:53.797916727 +0000 UTC m=+5352.271237260" Feb 26 15:42:53 crc kubenswrapper[4809]: I0226 15:42:53.839755 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wj6bq" podStartSLOduration=31.074347207 podStartE2EDuration="47.839737516s" podCreationTimestamp="2026-02-26 15:42:06 +0000 UTC" firstStartedPulling="2026-02-26 15:42:36.340474239 +0000 UTC m=+5334.813794762" lastFinishedPulling="2026-02-26 15:42:53.105864548 +0000 UTC m=+5351.579185071" observedRunningTime="2026-02-26 15:42:53.832785838 +0000 UTC m=+5352.306106381" watchObservedRunningTime="2026-02-26 15:42:53.839737516 +0000 UTC m=+5352.313058039" Feb 26 15:42:54 crc kubenswrapper[4809]: I0226 15:42:54.742777 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 15:42:55 crc kubenswrapper[4809]: I0226 15:42:55.034181 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" podUID="4334cfa8-d172-4916-81e2-520ee403cb04" containerName="oauth-openshift" containerID="cri-o://5cf1ade45d3ceee6be81edbd0bd8147ab812e281d37d20c963dc66307d8c5067" gracePeriod=15 Feb 26 15:42:55 crc kubenswrapper[4809]: I0226 15:42:55.797610 4809 generic.go:334] "Generic (PLEG): container finished" podID="4334cfa8-d172-4916-81e2-520ee403cb04" containerID="5cf1ade45d3ceee6be81edbd0bd8147ab812e281d37d20c963dc66307d8c5067" exitCode=0 Feb 26 15:42:55 crc kubenswrapper[4809]: I0226 15:42:55.797650 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" event={"ID":"4334cfa8-d172-4916-81e2-520ee403cb04","Type":"ContainerDied","Data":"5cf1ade45d3ceee6be81edbd0bd8147ab812e281d37d20c963dc66307d8c5067"} Feb 26 15:42:56 crc kubenswrapper[4809]: I0226 15:42:56.815459 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" event={"ID":"4334cfa8-d172-4916-81e2-520ee403cb04","Type":"ContainerStarted","Data":"d623b995b01997daf1374dbb9a3db6bd1c9d1cba3f7fbeb84d33ca799a989f33"} Feb 26 15:42:56 crc 
kubenswrapper[4809]: I0226 15:42:56.815734 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 15:42:56 crc kubenswrapper[4809]: I0226 15:42:56.881770 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55bd48d569-j44gp" Feb 26 15:42:57 crc kubenswrapper[4809]: E0226 15:42:57.264148 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:42:58 crc kubenswrapper[4809]: I0226 15:42:58.467855 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:58 crc kubenswrapper[4809]: I0226 15:42:58.467924 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:42:59 crc kubenswrapper[4809]: I0226 15:42:59.540665 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" probeResult="failure" output=< Feb 26 15:42:59 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:42:59 crc kubenswrapper[4809]: > Feb 26 15:42:59 crc kubenswrapper[4809]: I0226 15:42:59.906790 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:00 crc kubenswrapper[4809]: I0226 15:43:00.303316 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78c95b4464-fclfm" Feb 26 15:43:00 crc kubenswrapper[4809]: I0226 15:43:00.897280 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:00 crc kubenswrapper[4809]: I0226 15:43:00.897643 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:01 crc kubenswrapper[4809]: I0226 15:43:01.104609 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxdxv" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:01 crc kubenswrapper[4809]: > Feb 26 15:43:01 crc kubenswrapper[4809]: I0226 15:43:01.949769 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zk4jc" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:01 crc kubenswrapper[4809]: > Feb 26 15:43:02 crc kubenswrapper[4809]: E0226 15:43:02.709253 4809 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:04 crc kubenswrapper[4809]: I0226 15:43:04.762740 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:07 crc kubenswrapper[4809]: E0226 15:43:07.326557 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:09 crc kubenswrapper[4809]: I0226 15:43:09.523884 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:09 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:09 crc kubenswrapper[4809]: > Feb 26 15:43:09 crc kubenswrapper[4809]: I0226 15:43:09.760518 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:10 crc kubenswrapper[4809]: I0226 15:43:10.961535 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.025988 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.098296 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxdxv" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:11 crc kubenswrapper[4809]: > Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.793725 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.794070 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.794116 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.795380 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:43:11 crc kubenswrapper[4809]: I0226 15:43:11.795441 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" gracePeriod=600 Feb 26 15:43:11 crc kubenswrapper[4809]: E0226 15:43:11.943592 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.021633 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" exitCode=0 Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.021683 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567"} Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.023114 4809 scope.go:117] "RemoveContainer" containerID="f01cf79fc596dbc7d6211c2dd46cafc80d8432bc5cde1eac361c1990f9515189" Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.023315 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:43:12 crc kubenswrapper[4809]: E0226 15:43:12.023755 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.132510 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:43:12 crc kubenswrapper[4809]: I0226 15:43:12.132750 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zk4jc" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="registry-server" 
containerID="cri-o://d3af94d72fe0c2dbdf68f6e87546c0db7982307a9294ea949c9a049a8ecd4b7a" gracePeriod=2 Feb 26 15:43:13 crc kubenswrapper[4809]: I0226 15:43:13.095612 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerID="d3af94d72fe0c2dbdf68f6e87546c0db7982307a9294ea949c9a049a8ecd4b7a" exitCode=0 Feb 26 15:43:13 crc kubenswrapper[4809]: I0226 15:43:13.095730 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerDied","Data":"d3af94d72fe0c2dbdf68f6e87546c0db7982307a9294ea949c9a049a8ecd4b7a"} Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.445961 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.491591 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content\") pod \"c4e69056-fdf6-4da0-99bd-173c1235e98f\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.492370 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqvh\" (UniqueName: \"kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh\") pod \"c4e69056-fdf6-4da0-99bd-173c1235e98f\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.492597 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities\") pod \"c4e69056-fdf6-4da0-99bd-173c1235e98f\" (UID: \"c4e69056-fdf6-4da0-99bd-173c1235e98f\") " Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.498614 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities" (OuterVolumeSpecName: "utilities") pod "c4e69056-fdf6-4da0-99bd-173c1235e98f" (UID: "c4e69056-fdf6-4da0-99bd-173c1235e98f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.522099 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh" (OuterVolumeSpecName: "kube-api-access-wlqvh") pod "c4e69056-fdf6-4da0-99bd-173c1235e98f" (UID: "c4e69056-fdf6-4da0-99bd-173c1235e98f"). InnerVolumeSpecName "kube-api-access-wlqvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.600414 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.600455 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlqvh\" (UniqueName: \"kubernetes.io/projected/c4e69056-fdf6-4da0-99bd-173c1235e98f-kube-api-access-wlqvh\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.604929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4e69056-fdf6-4da0-99bd-173c1235e98f" (UID: "c4e69056-fdf6-4da0-99bd-173c1235e98f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.702203 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e69056-fdf6-4da0-99bd-173c1235e98f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:14 crc kubenswrapper[4809]: I0226 15:43:14.762046 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.127343 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk4jc" event={"ID":"c4e69056-fdf6-4da0-99bd-173c1235e98f","Type":"ContainerDied","Data":"c25a387635ff03d49c810751866031df32c47760cc2c86fea61fa47f895b0f36"} Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.127413 4809 scope.go:117] "RemoveContainer" containerID="d3af94d72fe0c2dbdf68f6e87546c0db7982307a9294ea949c9a049a8ecd4b7a" Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.127418 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zk4jc" Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.164333 4809 scope.go:117] "RemoveContainer" containerID="70d435a0b13e540be45d98cb4104720cb6779ea24b59935b87e0f57068845ec0" Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.165102 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.176460 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zk4jc"] Feb 26 15:43:15 crc kubenswrapper[4809]: I0226 15:43:15.859976 4809 scope.go:117] "RemoveContainer" containerID="3e01d3237030096700853e18076b33ac61097ecc5047ce1a55fe7302c6be5ffa" Feb 26 15:43:16 crc kubenswrapper[4809]: I0226 15:43:16.287326 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" path="/var/lib/kubelet/pods/c4e69056-fdf6-4da0-99bd-173c1235e98f/volumes" Feb 26 15:43:17 crc kubenswrapper[4809]: E0226 15:43:17.635679 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:17 crc kubenswrapper[4809]: E0226 15:43:17.635795 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:19 crc kubenswrapper[4809]: I0226 15:43:19.532921 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:19 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:19 crc kubenswrapper[4809]: > Feb 26 15:43:19 crc kubenswrapper[4809]: I0226 15:43:19.801483 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:20 crc kubenswrapper[4809]: I0226 15:43:20.100446 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:43:20 crc kubenswrapper[4809]: I0226 15:43:20.153720 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:43:20 crc kubenswrapper[4809]: I0226 15:43:20.340615 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.192518 4809 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxdxv" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" containerID="cri-o://289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd" gracePeriod=2 Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.844055 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.989233 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content\") pod \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.989333 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities\") pod \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.989366 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d65dm\" (UniqueName: \"kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm\") pod \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\" (UID: \"6fb2b260-67e9-4ef5-ad49-b5326ce991b7\") " Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.992422 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities" (OuterVolumeSpecName: "utilities") pod "6fb2b260-67e9-4ef5-ad49-b5326ce991b7" (UID: "6fb2b260-67e9-4ef5-ad49-b5326ce991b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:21 crc kubenswrapper[4809]: I0226 15:43:21.998282 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm" (OuterVolumeSpecName: "kube-api-access-d65dm") pod "6fb2b260-67e9-4ef5-ad49-b5326ce991b7" (UID: "6fb2b260-67e9-4ef5-ad49-b5326ce991b7"). InnerVolumeSpecName "kube-api-access-d65dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.093077 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.093110 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d65dm\" (UniqueName: \"kubernetes.io/projected/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-kube-api-access-d65dm\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.124847 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fb2b260-67e9-4ef5-ad49-b5326ce991b7" (UID: "6fb2b260-67e9-4ef5-ad49-b5326ce991b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.195600 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb2b260-67e9-4ef5-ad49-b5326ce991b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.215107 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/3.log" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.218555 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.223099 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.223160 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="854ce24c91733d533bed26c34490087aa405f13b2800517bf6ce5ae62b6975b3" exitCode=137 Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.223255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"854ce24c91733d533bed26c34490087aa405f13b2800517bf6ce5ae62b6975b3"} Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.223298 4809 scope.go:117] "RemoveContainer" containerID="9fd6731a973ac1ef36d3f9a00dcf2810ddc868e361968a526d4227e6f996fcec" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.233107 4809 generic.go:334] "Generic (PLEG): container finished" podID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerID="289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd" exitCode=0 Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.233177 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxdxv" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.233215 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerDied","Data":"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd"} Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.240239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxdxv" event={"ID":"6fb2b260-67e9-4ef5-ad49-b5326ce991b7","Type":"ContainerDied","Data":"7b413e8906c329c9ba9fdd628ed246ce8a602ef451d58f23e24720765e6ae1de"} Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.292769 4809 scope.go:117] "RemoveContainer" containerID="289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.317571 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.331207 4809 scope.go:117] "RemoveContainer" containerID="d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.333325 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxdxv"] Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.421270 4809 scope.go:117] "RemoveContainer" containerID="5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.484059 4809 scope.go:117] "RemoveContainer" containerID="289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.487550 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd\": container with ID starting with 289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd not found: ID does not exist" containerID="289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.487632 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd"} err="failed to get container status \"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd\": rpc error: code = NotFound desc = could not find container \"289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd\": container with ID starting with 289ee45efb80186dc1cafb2c0fc20ac081e8ae6e2b6f65b33679f0d65d399ecd not found: ID does not exist" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.487694 4809 scope.go:117] "RemoveContainer" containerID="d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.488285 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555\": container with ID starting with d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555 not found: ID does not exist" containerID="d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.488320 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555"} err="failed to get container status \"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555\": rpc error: code = NotFound desc = could not find container \"d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555\": container with ID starting with d592e205096f47eeca818c4c5cb288e50351e22ad1f05ee778494d89165e9555 not found: ID does not exist" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.488342 4809 scope.go:117] "RemoveContainer" containerID="5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.488812 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7\": container with ID starting with 5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7 not found: ID does not exist" containerID="5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.488835 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7"} err="failed to get container status \"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7\": rpc error: code = NotFound desc = could not find container \"5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7\": container with ID starting with 5148d0057f2b14bb2ffd83490847ee6806acf1e919a9cf438b6026e1b369a6f7 not found: ID does not exist" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.821208 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824128 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01413d5e-56a4-4b5b-9f12-7baad5eb2c02" containerName="oc" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824175 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="01413d5e-56a4-4b5b-9f12-7baad5eb2c02" containerName="oc" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824209 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="extract-utilities" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824218 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="extract-utilities" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824273 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="extract-content" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824294 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="extract-content" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824333 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="extract-utilities" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824342 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="extract-utilities" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 
15:43:22.824350 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824357 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824366 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824374 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: E0226 15:43:22.824389 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="extract-content" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.824396 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="extract-content" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.825493 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="01413d5e-56a4-4b5b-9f12-7baad5eb2c02" containerName="oc" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.827584 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e69056-fdf6-4da0-99bd-173c1235e98f" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.827607 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" containerName="registry-server" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.837113 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.916869 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.917055 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f499\" (UniqueName: \"kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.917115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:22 crc kubenswrapper[4809]: I0226 15:43:22.976406 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.019140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f499\" (UniqueName: \"kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.019226 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.019502 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.020485 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.021135 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.055721 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7f499\" (UniqueName: \"kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499\") pod \"certified-operators-bbx4l\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.158906 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.263325 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/3.log" Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.266873 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:43:23 crc kubenswrapper[4809]: W0226 15:43:23.794405 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce8c2c31_d1c4_4941_a36e_f24d724d90b7.slice/crio-2d1c6cfe5ec15745cfdd4e5aee0cbb8de4c46e0a1121a4778370aa5e0072da70 WatchSource:0}: Error finding container 2d1c6cfe5ec15745cfdd4e5aee0cbb8de4c46e0a1121a4778370aa5e0072da70: Status 404 returned error can't find the container with id 2d1c6cfe5ec15745cfdd4e5aee0cbb8de4c46e0a1121a4778370aa5e0072da70 Feb 26 15:43:23 crc kubenswrapper[4809]: I0226 15:43:23.796698 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.281195 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb2b260-67e9-4ef5-ad49-b5326ce991b7" path="/var/lib/kubelet/pods/6fb2b260-67e9-4ef5-ad49-b5326ce991b7/volumes" Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.289601 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/3.log" Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.291564 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.291682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e258df93d141eeb3b69951f4b39ddc0a0ec2b3a3eb1c2e62d1979f47c05b096"} Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.293848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerStarted","Data":"ce87a3442539d4e0fbcbbb07d78b6191a7db80dc9bb5727aca63c235f0e2d369"} Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.293890 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerStarted","Data":"2d1c6cfe5ec15745cfdd4e5aee0cbb8de4c46e0a1121a4778370aa5e0072da70"} Feb 26 15:43:24 crc kubenswrapper[4809]: I0226 15:43:24.762101 4809 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:25 crc kubenswrapper[4809]: I0226 15:43:25.305485 4809 generic.go:334] "Generic (PLEG): container finished" podID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerID="ce87a3442539d4e0fbcbbb07d78b6191a7db80dc9bb5727aca63c235f0e2d369" exitCode=0 Feb 26 15:43:25 crc kubenswrapper[4809]: I0226 15:43:25.305536 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerDied","Data":"ce87a3442539d4e0fbcbbb07d78b6191a7db80dc9bb5727aca63c235f0e2d369"} Feb 26 15:43:25 crc kubenswrapper[4809]: I0226 15:43:25.311341 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:43:26 crc kubenswrapper[4809]: I0226 15:43:26.257628 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:43:26 crc kubenswrapper[4809]: E0226 15:43:26.258558 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:43:27 crc kubenswrapper[4809]: I0226 15:43:27.330495 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerStarted","Data":"6ef844c5832ed37f7a66163f0b949b3f8babc33b025f139a3598ecce8706d03b"} Feb 26 15:43:28 crc kubenswrapper[4809]: E0226 15:43:28.000941 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:29 crc kubenswrapper[4809]: I0226 15:43:29.527704 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:29 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:29 crc kubenswrapper[4809]: > Feb 26 15:43:29 crc kubenswrapper[4809]: I0226 15:43:29.766447 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:31 crc kubenswrapper[4809]: I0226 15:43:31.172345 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:43:31 crc kubenswrapper[4809]: I0226 15:43:31.818600 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:43:31 crc kubenswrapper[4809]: I0226 15:43:31.825424 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:43:32 crc kubenswrapper[4809]: I0226 15:43:32.392611 4809 generic.go:334] "Generic (PLEG): container finished" podID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerID="6ef844c5832ed37f7a66163f0b949b3f8babc33b025f139a3598ecce8706d03b" exitCode=0 Feb 26 15:43:32 crc kubenswrapper[4809]: I0226 15:43:32.392992 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerDied","Data":"6ef844c5832ed37f7a66163f0b949b3f8babc33b025f139a3598ecce8706d03b"} Feb 26 15:43:32 crc kubenswrapper[4809]: E0226 15:43:32.502274 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:33 crc kubenswrapper[4809]: I0226 15:43:33.421350 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerStarted","Data":"6f4766494cdc85e2387d6735cf70a89c474475e17516afd958346e2224bdc246"} Feb 26 15:43:33 crc kubenswrapper[4809]: I0226 15:43:33.442033 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbx4l" podStartSLOduration=3.812379091 podStartE2EDuration="11.441992842s" podCreationTimestamp="2026-02-26 15:43:22 +0000 UTC" firstStartedPulling="2026-02-26 15:43:25.307807887 +0000 UTC m=+5383.781128410" lastFinishedPulling="2026-02-26 15:43:32.937421638 +0000 UTC m=+5391.410742161" observedRunningTime="2026-02-26 15:43:33.43877299 +0000 UTC m=+5391.912093553" watchObservedRunningTime="2026-02-26 15:43:33.441992842 +0000 UTC m=+5391.915313375" Feb 26 15:43:34 crc kubenswrapper[4809]: I0226 15:43:34.776076 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:35 crc kubenswrapper[4809]: I0226 15:43:35.443296 4809 generic.go:334] "Generic (PLEG): container finished" podID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" containerID="7fa5e4884e447a13c4ab82662cf5732bebc5c003e17fece4b5345b59662c6ffc" exitCode=1 Feb 26 15:43:35 crc kubenswrapper[4809]: I0226 15:43:35.443364 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5dc4d14b-07db-462f-9fb4-8a00eb3452be","Type":"ContainerDied","Data":"7fa5e4884e447a13c4ab82662cf5732bebc5c003e17fece4b5345b59662c6ffc"} Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.274927 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296472 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8ztj\" (UniqueName: \"kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296662 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296716 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296767 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296829 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296852 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.296958 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.297005 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.297040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret\") pod \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\" (UID: \"5dc4d14b-07db-462f-9fb4-8a00eb3452be\") " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.303415 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data" (OuterVolumeSpecName: "config-data") pod 
"5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.335387 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.338507 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.341663 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj" (OuterVolumeSpecName: "kube-api-access-v8ztj") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "kube-api-access-v8ztj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.359536 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.366625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.378534 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.389913 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.403858 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.403903 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.403917 4809 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.403930 4809 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5dc4d14b-07db-462f-9fb4-8a00eb3452be-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.403943 4809 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.407422 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.407477 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.407494 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8ztj\" (UniqueName: \"kubernetes.io/projected/5dc4d14b-07db-462f-9fb4-8a00eb3452be-kube-api-access-v8ztj\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.424455 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5dc4d14b-07db-462f-9fb4-8a00eb3452be" (UID: "5dc4d14b-07db-462f-9fb4-8a00eb3452be"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.489349 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5dc4d14b-07db-462f-9fb4-8a00eb3452be","Type":"ContainerDied","Data":"2413499fb3bec5e2b94d915f0ded4d82274bc01d2ee807489caecb7063e2052f"} Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.489388 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2413499fb3bec5e2b94d915f0ded4d82274bc01d2ee807489caecb7063e2052f" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.489453 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.490882 4809 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.510187 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5dc4d14b-07db-462f-9fb4-8a00eb3452be-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:37 crc kubenswrapper[4809]: I0226 15:43:37.510219 4809 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:38 crc kubenswrapper[4809]: I0226 15:43:38.256712 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:43:38 crc kubenswrapper[4809]: E0226 15:43:38.257298 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:43:38 crc kubenswrapper[4809]: E0226 15:43:38.450477 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice/crio-01ed3cecafd56c9a44b27dd6d214f1dd6dfcdc1464e2712a6ee895d577301fae\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01413d5e_56a4_4b5b_9f12_7baad5eb2c02.slice\": RecentStats: unable to find data in memory cache]" Feb 26 15:43:39 crc kubenswrapper[4809]: I0226 15:43:39.521276 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" probeResult="failure" output=< Feb 26 15:43:39 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:43:39 crc kubenswrapper[4809]: > Feb 26 15:43:39 crc kubenswrapper[4809]: I0226 15:43:39.773458 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:41 crc kubenswrapper[4809]: I0226 15:43:41.175584 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.159517 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.160026 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.207418 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 
15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.615742 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.851616 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:43:43 crc kubenswrapper[4809]: E0226 15:43:43.852457 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.852474 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.852714 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc4d14b-07db-462f-9fb4-8a00eb3452be" containerName="tempest-tests-tempest-tests-runner" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.853564 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.862190 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7fd68" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.864735 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.952468 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:43 crc kubenswrapper[4809]: I0226 15:43:43.952584 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2qg\" (UniqueName: \"kubernetes.io/projected/b12561a8-3ff6-42aa-81ae-a7a8b304c6c0-kube-api-access-ng2qg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.054906 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng2qg\" (UniqueName: \"kubernetes.io/projected/b12561a8-3ff6-42aa-81ae-a7a8b304c6c0-kube-api-access-ng2qg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.055220 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.057094 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.057683 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.074489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng2qg\" (UniqueName: \"kubernetes.io/projected/b12561a8-3ff6-42aa-81ae-a7a8b304c6c0-kube-api-access-ng2qg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.097840 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.183834 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.714680 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 15:43:44 crc kubenswrapper[4809]: W0226 15:43:44.727394 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb12561a8_3ff6_42aa_81ae_a7a8b304c6c0.slice/crio-d41c81d3d9bc5cfdae78895163e9ee4cc238155a07eb371e00d4d5f2eb5e5fb5 WatchSource:0}: Error finding container d41c81d3d9bc5cfdae78895163e9ee4cc238155a07eb371e00d4d5f2eb5e5fb5: Status 404 returned error can't find the container with id d41c81d3d9bc5cfdae78895163e9ee4cc238155a07eb371e00d4d5f2eb5e5fb5 Feb 26 15:43:44 crc kubenswrapper[4809]: I0226 15:43:44.763943 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:45 crc kubenswrapper[4809]: I0226 15:43:45.589705 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0","Type":"ContainerStarted","Data":"d41c81d3d9bc5cfdae78895163e9ee4cc238155a07eb371e00d4d5f2eb5e5fb5"} Feb 26 15:43:45 crc kubenswrapper[4809]: I0226 15:43:45.591145 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbx4l" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="registry-server" containerID="cri-o://6f4766494cdc85e2387d6735cf70a89c474475e17516afd958346e2224bdc246" gracePeriod=2 Feb 26 15:43:46 crc kubenswrapper[4809]: I0226 15:43:46.692978 4809 generic.go:334] "Generic (PLEG): container finished" podID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerID="6f4766494cdc85e2387d6735cf70a89c474475e17516afd958346e2224bdc246" exitCode=0 Feb 26 15:43:46 crc kubenswrapper[4809]: I0226 
15:43:46.693164 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerDied","Data":"6f4766494cdc85e2387d6735cf70a89c474475e17516afd958346e2224bdc246"} Feb 26 15:43:46 crc kubenswrapper[4809]: I0226 15:43:46.961562 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.068199 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content\") pod \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.068328 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities\") pod \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.068395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f499\" (UniqueName: \"kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499\") pod \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\" (UID: \"ce8c2c31-d1c4-4941-a36e-f24d724d90b7\") " Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.069334 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities" (OuterVolumeSpecName: "utilities") pod "ce8c2c31-d1c4-4941-a36e-f24d724d90b7" (UID: "ce8c2c31-d1c4-4941-a36e-f24d724d90b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.086097 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499" (OuterVolumeSpecName: "kube-api-access-7f499") pod "ce8c2c31-d1c4-4941-a36e-f24d724d90b7" (UID: "ce8c2c31-d1c4-4941-a36e-f24d724d90b7"). InnerVolumeSpecName "kube-api-access-7f499". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.106482 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce8c2c31-d1c4-4941-a36e-f24d724d90b7" (UID: "ce8c2c31-d1c4-4941-a36e-f24d724d90b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.171428 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.171472 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f499\" (UniqueName: \"kubernetes.io/projected/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-kube-api-access-7f499\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.171484 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce8c2c31-d1c4-4941-a36e-f24d724d90b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.708373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbx4l" event={"ID":"ce8c2c31-d1c4-4941-a36e-f24d724d90b7","Type":"ContainerDied","Data":"2d1c6cfe5ec15745cfdd4e5aee0cbb8de4c46e0a1121a4778370aa5e0072da70"} Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.708720 4809 scope.go:117] "RemoveContainer" containerID="6f4766494cdc85e2387d6735cf70a89c474475e17516afd958346e2224bdc246" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.708441 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbx4l" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.714356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b12561a8-3ff6-42aa-81ae-a7a8b304c6c0","Type":"ContainerStarted","Data":"a5c0a787c7122b461f02f055726649d0c8a27dcab0e6a299a7beed9772bd6bd9"} Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.737600 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.003112419 podStartE2EDuration="4.737576393s" podCreationTimestamp="2026-02-26 15:43:43 +0000 UTC" firstStartedPulling="2026-02-26 15:43:44.730915898 +0000 UTC m=+5403.204236431" lastFinishedPulling="2026-02-26 15:43:46.465379882 +0000 UTC m=+5404.938700405" observedRunningTime="2026-02-26 15:43:47.729886525 +0000 UTC m=+5406.203207038" watchObservedRunningTime="2026-02-26 15:43:47.737576393 +0000 UTC m=+5406.210896916" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.738695 4809 scope.go:117] "RemoveContainer" containerID="6ef844c5832ed37f7a66163f0b949b3f8babc33b025f139a3598ecce8706d03b" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.789356 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.795358 4809 scope.go:117] "RemoveContainer" containerID="ce87a3442539d4e0fbcbbb07d78b6191a7db80dc9bb5727aca63c235f0e2d369" Feb 26 15:43:47 crc kubenswrapper[4809]: I0226 15:43:47.833352 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbx4l"] Feb 26 15:43:48 crc kubenswrapper[4809]: I0226 15:43:48.277623 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" path="/var/lib/kubelet/pods/ce8c2c31-d1c4-4941-a36e-f24d724d90b7/volumes" Feb 26 15:43:48 crc kubenswrapper[4809]: I0226 
15:43:48.372048 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 15:43:48 crc kubenswrapper[4809]: I0226 15:43:48.541072 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 15:43:48 crc kubenswrapper[4809]: I0226 15:43:48.596877 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:43:48 crc kubenswrapper[4809]: I0226 15:43:48.680381 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:43:49 crc kubenswrapper[4809]: I0226 15:43:49.256682 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:43:49 crc kubenswrapper[4809]: E0226 15:43:49.258507 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:43:49 crc kubenswrapper[4809]: I0226 15:43:49.767164 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74917c3f-f22d-43b0-9fbf-6473cb9c6c9d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 15:43:49 crc kubenswrapper[4809]: I0226 15:43:49.825360 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 26 15:43:49 crc kubenswrapper[4809]: I0226 15:43:49.940300 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 26 15:43:50 crc kubenswrapper[4809]: I0226 15:43:50.458700 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:43:50 crc kubenswrapper[4809]: I0226 15:43:50.459771 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wj6bq" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" containerID="cri-o://e24f1d1023d8271324050455b762a4f58e34fffe1e2efa5e9136a19f43823ec0" gracePeriod=2 Feb 26 15:43:50 crc kubenswrapper[4809]: I0226 15:43:50.764503 4809 generic.go:334] "Generic (PLEG): container finished" podID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerID="e24f1d1023d8271324050455b762a4f58e34fffe1e2efa5e9136a19f43823ec0" exitCode=0 Feb 26 15:43:50 crc kubenswrapper[4809]: I0226 15:43:50.764782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerDied","Data":"e24f1d1023d8271324050455b762a4f58e34fffe1e2efa5e9136a19f43823ec0"} Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.073759 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.200056 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcjxz\" (UniqueName: \"kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz\") pod \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.200415 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content\") pod \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.200483 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities\") pod \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\" (UID: \"28ea2153-1aae-4065-94f2-138ffbfa4cf8\") " Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.215875 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz" (OuterVolumeSpecName: "kube-api-access-hcjxz") pod "28ea2153-1aae-4065-94f2-138ffbfa4cf8" (UID: "28ea2153-1aae-4065-94f2-138ffbfa4cf8"). InnerVolumeSpecName "kube-api-access-hcjxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.226120 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities" (OuterVolumeSpecName: "utilities") pod "28ea2153-1aae-4065-94f2-138ffbfa4cf8" (UID: "28ea2153-1aae-4065-94f2-138ffbfa4cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.304211 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.304243 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcjxz\" (UniqueName: \"kubernetes.io/projected/28ea2153-1aae-4065-94f2-138ffbfa4cf8-kube-api-access-hcjxz\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.449605 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28ea2153-1aae-4065-94f2-138ffbfa4cf8" (UID: "28ea2153-1aae-4065-94f2-138ffbfa4cf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.512986 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ea2153-1aae-4065-94f2-138ffbfa4cf8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.781703 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj6bq" event={"ID":"28ea2153-1aae-4065-94f2-138ffbfa4cf8","Type":"ContainerDied","Data":"98b5a4629dca8914e3c7e6b89fbdeb91b9c8467826b351447ab15020bed6e73e"} Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.781945 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj6bq" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.782114 4809 scope.go:117] "RemoveContainer" containerID="e24f1d1023d8271324050455b762a4f58e34fffe1e2efa5e9136a19f43823ec0" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.823744 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.837937 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wj6bq"] Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.840331 4809 scope.go:117] "RemoveContainer" containerID="7272da7e61e70399e54596b82204db2354a7cbcf550e96d04bbe3936dbab3c6e" Feb 26 15:43:51 crc kubenswrapper[4809]: I0226 15:43:51.876296 4809 scope.go:117] "RemoveContainer" containerID="2671cf5ef19ef5504dbeafe162200cd1eb172e08c3b0b0aa713b845b7e7fb85f" Feb 26 15:43:52 crc kubenswrapper[4809]: I0226 15:43:52.275979 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" path="/var/lib/kubelet/pods/28ea2153-1aae-4065-94f2-138ffbfa4cf8/volumes" Feb 26 15:43:52 crc kubenswrapper[4809]: I0226 15:43:52.914147 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535336-dszbd"] Feb 26 15:43:52 crc kubenswrapper[4809]: I0226 15:43:52.926885 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535336-dszbd"] Feb 26 15:43:54 crc kubenswrapper[4809]: I0226 15:43:54.271489 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eadd52d0-ffe0-4f9b-a2b4-6634866384ca" path="/var/lib/kubelet/pods/eadd52d0-ffe0-4f9b-a2b4-6634866384ca/volumes" Feb 26 15:43:54 crc kubenswrapper[4809]: I0226 15:43:54.779833 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.209068 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535344-r84gt"] Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211661 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="extract-utilities" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.211678 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="extract-utilities" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211707 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: 
I0226 15:44:00.211713 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211728 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="extract-content" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.211735 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="extract-content" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211746 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.211751 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211772 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="extract-utilities" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.211782 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="extract-utilities" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.211794 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="extract-content" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.211801 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="extract-content" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.212099 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ea2153-1aae-4065-94f2-138ffbfa4cf8" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.212124 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8c2c31-d1c4-4941-a36e-f24d724d90b7" containerName="registry-server" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.213223 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.227666 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535344-r84gt"] Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.263475 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:44:00 crc kubenswrapper[4809]: E0226 15:44:00.263702 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.351829 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.354042 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.354252 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.378708 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggbt\" (UniqueName: \"kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt\") pod \"auto-csr-approver-29535344-r84gt\" (UID: \"63b2e435-d27d-443e-81b6-59a4260eea4d\") " pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.481268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggbt\" (UniqueName: \"kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt\") pod \"auto-csr-approver-29535344-r84gt\" (UID: \"63b2e435-d27d-443e-81b6-59a4260eea4d\") " pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.508488 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggbt\" (UniqueName: \"kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt\") pod \"auto-csr-approver-29535344-r84gt\" (UID: \"63b2e435-d27d-443e-81b6-59a4260eea4d\") " pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:00 crc kubenswrapper[4809]: I0226 15:44:00.620413 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:01 crc kubenswrapper[4809]: I0226 15:44:01.429835 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535344-r84gt"] Feb 26 15:44:01 crc kubenswrapper[4809]: I0226 15:44:01.901631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535344-r84gt" event={"ID":"63b2e435-d27d-443e-81b6-59a4260eea4d","Type":"ContainerStarted","Data":"0b6ce0c683fc01b8a14056c147e1925d8b9569f771f58f729b70f85f47cf579c"} Feb 26 15:44:04 crc kubenswrapper[4809]: I0226 15:44:04.939574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535344-r84gt" event={"ID":"63b2e435-d27d-443e-81b6-59a4260eea4d","Type":"ContainerStarted","Data":"846f97919824b12b2c76464e3cf1afb84c5c3c250d60bb32812c2a4a3e61a411"} Feb 26 15:44:04 crc kubenswrapper[4809]: I0226 15:44:04.963693 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535344-r84gt" podStartSLOduration=3.599764473 podStartE2EDuration="4.963669149s" podCreationTimestamp="2026-02-26 15:44:00 +0000 UTC" firstStartedPulling="2026-02-26 15:44:01.450672202 +0000 UTC m=+5419.923992745" lastFinishedPulling="2026-02-26 15:44:02.814576898 +0000 UTC m=+5421.287897421" observedRunningTime="2026-02-26 15:44:04.953418208 +0000 UTC m=+5423.426738731" watchObservedRunningTime="2026-02-26 15:44:04.963669149 +0000 UTC m=+5423.436989672" Feb 26 15:44:05 crc kubenswrapper[4809]: I0226 15:44:05.954984 4809 generic.go:334] "Generic (PLEG): container finished" podID="63b2e435-d27d-443e-81b6-59a4260eea4d" containerID="846f97919824b12b2c76464e3cf1afb84c5c3c250d60bb32812c2a4a3e61a411" exitCode=0 Feb 26 15:44:05 crc kubenswrapper[4809]: I0226 15:44:05.955075 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535344-r84gt" event={"ID":"63b2e435-d27d-443e-81b6-59a4260eea4d","Type":"ContainerDied","Data":"846f97919824b12b2c76464e3cf1afb84c5c3c250d60bb32812c2a4a3e61a411"} Feb 26 15:44:07 crc kubenswrapper[4809]: I0226 15:44:07.438739 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:07 crc kubenswrapper[4809]: I0226 15:44:07.581685 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ggbt\" (UniqueName: \"kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt\") pod \"63b2e435-d27d-443e-81b6-59a4260eea4d\" (UID: \"63b2e435-d27d-443e-81b6-59a4260eea4d\") " Feb 26 15:44:07 crc kubenswrapper[4809]: I0226 15:44:07.592924 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt" (OuterVolumeSpecName: "kube-api-access-2ggbt") pod "63b2e435-d27d-443e-81b6-59a4260eea4d" (UID: "63b2e435-d27d-443e-81b6-59a4260eea4d"). InnerVolumeSpecName "kube-api-access-2ggbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:44:07 crc kubenswrapper[4809]: I0226 15:44:07.685932 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ggbt\" (UniqueName: \"kubernetes.io/projected/63b2e435-d27d-443e-81b6-59a4260eea4d-kube-api-access-2ggbt\") on node \"crc\" DevicePath \"\"" Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.005986 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535344-r84gt" event={"ID":"63b2e435-d27d-443e-81b6-59a4260eea4d","Type":"ContainerDied","Data":"0b6ce0c683fc01b8a14056c147e1925d8b9569f771f58f729b70f85f47cf579c"} Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.006090 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6ce0c683fc01b8a14056c147e1925d8b9569f771f58f729b70f85f47cf579c" Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.006512 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535344-r84gt" Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.050655 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535338-hcpvt"] Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.062946 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535338-hcpvt"] Feb 26 15:44:08 crc kubenswrapper[4809]: I0226 15:44:08.276286 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c1faaf-c2b3-4865-9464-ecfc12cd42c2" path="/var/lib/kubelet/pods/a3c1faaf-c2b3-4865-9464-ecfc12cd42c2/volumes" Feb 26 15:44:13 crc kubenswrapper[4809]: I0226 15:44:13.257756 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:44:13 crc kubenswrapper[4809]: E0226 15:44:13.258555 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:44:16 crc kubenswrapper[4809]: I0226 15:44:16.620346 4809 scope.go:117] "RemoveContainer" containerID="e29cdc9862ddab45a93bed5275461c9df103ed6ae8900ebced0f0facda0e47c3" Feb 26 15:44:16 crc kubenswrapper[4809]: I0226 15:44:16.826203 4809 scope.go:117] "RemoveContainer" containerID="091f1dfeb9f8e87f73d982feab73743171e4c6294a02f6ef8abe110d929f5bee" Feb 26 15:44:26 crc kubenswrapper[4809]: I0226 15:44:26.257604 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:44:26 crc kubenswrapper[4809]: E0226 15:44:26.258634 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.616285 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwnrr/must-gather-vtqpv"] Feb 26 
15:44:35 crc kubenswrapper[4809]: E0226 15:44:35.631652 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b2e435-d27d-443e-81b6-59a4260eea4d" containerName="oc" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.631718 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b2e435-d27d-443e-81b6-59a4260eea4d" containerName="oc" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.632268 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b2e435-d27d-443e-81b6-59a4260eea4d" containerName="oc" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.656948 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.666514 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwnrr"/"kube-root-ca.crt" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.666843 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwnrr"/"openshift-service-ca.crt" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.729549 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwnrr/must-gather-vtqpv"] Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.799210 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.799400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2fqw\" (UniqueName: \"kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.902768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.902968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2fqw\" (UniqueName: \"kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.903342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:35 crc kubenswrapper[4809]: I0226 15:44:35.929418 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2fqw\" (UniqueName: 
\"kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw\") pod \"must-gather-vtqpv\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:36 crc kubenswrapper[4809]: I0226 15:44:36.041267 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:44:36 crc kubenswrapper[4809]: I0226 15:44:36.676474 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwnrr/must-gather-vtqpv"] Feb 26 15:44:37 crc kubenswrapper[4809]: I0226 15:44:37.379532 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" event={"ID":"19517d8a-cde4-45ff-88e0-4026e339e2d3","Type":"ContainerStarted","Data":"0167b9a0102f129bc9dcb7cd5764454dc5f5c4369012d60f9e2d46b2b7dcdcf0"} Feb 26 15:44:41 crc kubenswrapper[4809]: I0226 15:44:41.256912 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:44:41 crc kubenswrapper[4809]: E0226 15:44:41.377246 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:44:48 crc kubenswrapper[4809]: I0226 15:44:48.608934 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" event={"ID":"19517d8a-cde4-45ff-88e0-4026e339e2d3","Type":"ContainerStarted","Data":"c706dbb69b0785a81125588f1670c65372479ebcd778bb9948d39ad4304e4c56"} Feb 26 15:44:48 crc kubenswrapper[4809]: I0226 15:44:48.609490 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" event={"ID":"19517d8a-cde4-45ff-88e0-4026e339e2d3","Type":"ContainerStarted","Data":"e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12"} Feb 26 15:44:48 crc kubenswrapper[4809]: I0226 15:44:48.630541 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" podStartSLOduration=2.98223677 podStartE2EDuration="13.630524595s" podCreationTimestamp="2026-02-26 15:44:35 +0000 UTC" firstStartedPulling="2026-02-26 15:44:36.670881227 +0000 UTC m=+5455.144201750" lastFinishedPulling="2026-02-26 15:44:47.319169052 +0000 UTC m=+5465.792489575" observedRunningTime="2026-02-26 15:44:48.624458613 +0000 UTC m=+5467.097779136" watchObservedRunningTime="2026-02-26 15:44:48.630524595 +0000 UTC m=+5467.103845118" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.089990 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-vr2pp"] Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.092328 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.095169 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwnrr"/"default-dockercfg-jxsvd" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.185783 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwf2\" (UniqueName: \"kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.185890 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.288318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwf2\" (UniqueName: \"kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.288390 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.290395 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.316303 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwf2\" (UniqueName: \"kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2\") pod \"crc-debug-vr2pp\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.420070 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:44:54 crc kubenswrapper[4809]: I0226 15:44:54.678852 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" event={"ID":"f6ce2764-18f0-4a15-80f9-be57eb532afc","Type":"ContainerStarted","Data":"89f048db6c3e47c225b5bcd7b4e62df579a0fd96c647f1db2d80d0e9469bd275"} Feb 26 15:44:56 crc kubenswrapper[4809]: I0226 15:44:56.257678 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:44:56 crc kubenswrapper[4809]: E0226 15:44:56.258228 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.233566 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4"] Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.236327 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.247969 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.248208 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.287969 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.288131 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr6x\" (UniqueName: \"kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.288229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.309613 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4"] Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.390405 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hbr6x\" (UniqueName: \"kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.390597 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.390971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.392200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.400362 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.410286 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbr6x\" (UniqueName: \"kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x\") pod \"collect-profiles-29535345-pmbh4\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:00 crc kubenswrapper[4809]: I0226 15:45:00.573343 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:01 crc kubenswrapper[4809]: I0226 15:45:01.365553 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4"] Feb 26 15:45:07 crc kubenswrapper[4809]: I0226 15:45:07.257725 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:45:07 crc kubenswrapper[4809]: E0226 15:45:07.258758 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:45:10 crc kubenswrapper[4809]: E0226 15:45:10.355903 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Feb 26 15:45:10 crc kubenswrapper[4809]: E0226 15:45:10.360666 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
crc-debug-vr2pp_openshift-must-gather-vwnrr(f6ce2764-18f0-4a15-80f9-be57eb532afc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 26 15:45:10 crc kubenswrapper[4809]: E0226 15:45:10.361903 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc"
Feb 26 15:45:10 crc kubenswrapper[4809]: I0226 15:45:10.901617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" event={"ID":"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9","Type":"ContainerStarted","Data":"c1c874b6920f6830a4e77d8061a0e871cb853cb595605aa6a40642bba8b8a8f6"}
Feb 26 15:45:10 crc kubenswrapper[4809]: I0226 15:45:10.902258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" event={"ID":"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9","Type":"ContainerStarted","Data":"99ca4a582b782e6baa5579ee3ff657d455450296a9349ead5e7e8170b80081d5"}
Feb 26 15:45:10 crc kubenswrapper[4809]: E0226 15:45:10.903567 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc"
Feb 26 15:45:10 crc kubenswrapper[4809]: I0226 15:45:10.948397 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" podStartSLOduration=10.948371858 podStartE2EDuration="10.948371858s" podCreationTimestamp="2026-02-26 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:45:10.938613981 +0000 UTC m=+5489.411934514" watchObservedRunningTime="2026-02-26 15:45:10.948371858 +0000 UTC m=+5489.421692391"
Feb 26 15:45:12 crc kubenswrapper[4809]: I0226 15:45:12.086105 4809 generic.go:334] "Generic (PLEG): container finished" podID="5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" containerID="c1c874b6920f6830a4e77d8061a0e871cb853cb595605aa6a40642bba8b8a8f6" exitCode=0
Feb 26 15:45:12 crc kubenswrapper[4809]: I0226 15:45:12.086204 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" event={"ID":"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9","Type":"ContainerDied","Data":"c1c874b6920f6830a4e77d8061a0e871cb853cb595605aa6a40642bba8b8a8f6"}
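
The two E-level pod_workers entries above show one failed pull surfacing first as ErrImagePull (the pull itself was canceled) and then, on the next sync, as ImagePullBackOff (the retry is now rate-limited). When triaging a capture like this, it helps to tally these errors per pod and reason; a minimal sketch, with the regexes inferred from the entry format above (note the literal \" escapes present in the journal text):

# Tally "Error syncing pod" reasons per pod/container from kubenswrapper
# journal lines read on stdin. Patterns are inferred from this capture.
import re
import sys
from collections import Counter

ERR = re.compile(r'"Error syncing pod, skipping" err="failed to \\"StartContainer\\" '
                 r'for \\"(?P<container>[^\\"]+)\\" with (?P<reason>\w+)')
POD = re.compile(r' pod="(?P<pod>[^"]+)"')

def main() -> None:
    counts: Counter = Counter()
    for line in sys.stdin:
        m = ERR.search(line)
        if not m:
            continue
        p = POD.search(line, m.end())
        pod = p["pod"] if p else "<unknown>"
        counts[(pod, m["container"], m["reason"])] += 1
    for (pod, container, reason), n in counts.most_common():
        print(f"{n:5d}  {reason:18s} {pod} ({container})")

if __name__ == "__main__":
    main()

Run against this capture it would show the CrashLoopBackOff entries for machine-config-daemon dominating, with the two one-off image-pull errors for container-00 alongside.
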
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.763275 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume\") pod \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.763504 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbr6x\" (UniqueName: \"kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x\") pod \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.763593 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume\") pod \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\" (UID: \"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9\") " Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.764899 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" (UID: "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.789883 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" (UID: "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.794219 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x" (OuterVolumeSpecName: "kube-api-access-hbr6x") pod "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" (UID: "5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9"). InnerVolumeSpecName "kube-api-access-hbr6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.866786 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbr6x\" (UniqueName: \"kubernetes.io/projected/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-kube-api-access-hbr6x\") on node \"crc\" DevicePath \"\"" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.866824 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:45:13 crc kubenswrapper[4809]: I0226 15:45:13.866836 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.030714 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b"] Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.043212 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535300-bqd9b"] Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.111390 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" event={"ID":"5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9","Type":"ContainerDied","Data":"99ca4a582b782e6baa5579ee3ff657d455450296a9349ead5e7e8170b80081d5"} Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.111440 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ca4a582b782e6baa5579ee3ff657d455450296a9349ead5e7e8170b80081d5" Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.111501 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535345-pmbh4" Feb 26 15:45:14 crc kubenswrapper[4809]: I0226 15:45:14.275180 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19430f16-b502-4902-8fd4-d0dabd493d3d" path="/var/lib/kubelet/pods/19430f16-b502-4902-8fd4-d0dabd493d3d/volumes" Feb 26 15:45:17 crc kubenswrapper[4809]: I0226 15:45:17.063848 4809 scope.go:117] "RemoveContainer" containerID="9f1761c808b490e58720fbbf4ecc7951b78f26a4033ea73ce904abbb4a7990c3" Feb 26 15:45:19 crc kubenswrapper[4809]: I0226 15:45:19.256646 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:45:19 crc kubenswrapper[4809]: E0226 15:45:19.257232 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:45:23 crc kubenswrapper[4809]: I0226 15:45:23.223675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" event={"ID":"f6ce2764-18f0-4a15-80f9-be57eb532afc","Type":"ContainerStarted","Data":"033170ce8ce944ef7c17cd0ec60ab7afbd6bf80f4be1ec964b3432b86f078288"} Feb 26 15:45:23 crc kubenswrapper[4809]: I0226 15:45:23.244846 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" podStartSLOduration=0.975855483 podStartE2EDuration="29.244829465s" podCreationTimestamp="2026-02-26 15:44:54 +0000 UTC" firstStartedPulling="2026-02-26 15:44:54.472624157 +0000 UTC m=+5472.945944680" lastFinishedPulling="2026-02-26 15:45:22.741598149 +0000 UTC m=+5501.214918662" observedRunningTime="2026-02-26 15:45:23.239526104 +0000 UTC m=+5501.712846637" watchObservedRunningTime="2026-02-26 15:45:23.244829465 +0000 UTC m=+5501.718149988" Feb 26 15:45:31 crc kubenswrapper[4809]: I0226 15:45:31.259005 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:45:31 crc kubenswrapper[4809]: E0226 15:45:31.260554 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:45:45 crc kubenswrapper[4809]: I0226 15:45:45.257584 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:45:45 crc kubenswrapper[4809]: E0226 15:45:45.258423 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:45:58 crc kubenswrapper[4809]: I0226 
15:45:58.257057 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:45:58 crc kubenswrapper[4809]: E0226 15:45:58.257981 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.156910 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535346-wgg47"] Feb 26 15:46:00 crc kubenswrapper[4809]: E0226 15:46:00.157724 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" containerName="collect-profiles" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.157738 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" containerName="collect-profiles" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.157975 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5296ffa8-c7e5-44e7-b9dc-5346fb4ad5a9" containerName="collect-profiles" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.159703 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.161900 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.162253 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.162274 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.173279 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535346-wgg47"] Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.278420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25rb\" (UniqueName: \"kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb\") pod \"auto-csr-approver-29535346-wgg47\" (UID: \"97b07699-ff45-4b3a-a61c-b8eccdaa792a\") " pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.381310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25rb\" (UniqueName: \"kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb\") pod \"auto-csr-approver-29535346-wgg47\" (UID: \"97b07699-ff45-4b3a-a61c-b8eccdaa792a\") " pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.416103 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25rb\" (UniqueName: \"kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb\") pod \"auto-csr-approver-29535346-wgg47\" (UID: \"97b07699-ff45-4b3a-a61c-b8eccdaa792a\") " 
pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:00 crc kubenswrapper[4809]: I0226 15:46:00.484159 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:01 crc kubenswrapper[4809]: I0226 15:46:01.323909 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535346-wgg47"] Feb 26 15:46:01 crc kubenswrapper[4809]: I0226 15:46:01.709922 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535346-wgg47" event={"ID":"97b07699-ff45-4b3a-a61c-b8eccdaa792a","Type":"ContainerStarted","Data":"9c3564fdc7f72dfc5696e0bec684f35a974859556d51919921b87df3c6c88240"} Feb 26 15:46:03 crc kubenswrapper[4809]: I0226 15:46:03.738950 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535346-wgg47" event={"ID":"97b07699-ff45-4b3a-a61c-b8eccdaa792a","Type":"ContainerStarted","Data":"a49ee7b379f836f32327f1a5a92222d852b513d8f45ab98bbc6505547d8eed63"} Feb 26 15:46:03 crc kubenswrapper[4809]: I0226 15:46:03.755561 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535346-wgg47" podStartSLOduration=2.717249787 podStartE2EDuration="3.755543723s" podCreationTimestamp="2026-02-26 15:46:00 +0000 UTC" firstStartedPulling="2026-02-26 15:46:01.336449068 +0000 UTC m=+5539.809769591" lastFinishedPulling="2026-02-26 15:46:02.374743004 +0000 UTC m=+5540.848063527" observedRunningTime="2026-02-26 15:46:03.753443463 +0000 UTC m=+5542.226763986" watchObservedRunningTime="2026-02-26 15:46:03.755543723 +0000 UTC m=+5542.228864256" Feb 26 15:46:05 crc kubenswrapper[4809]: I0226 15:46:05.759715 4809 generic.go:334] "Generic (PLEG): container finished" podID="97b07699-ff45-4b3a-a61c-b8eccdaa792a" containerID="a49ee7b379f836f32327f1a5a92222d852b513d8f45ab98bbc6505547d8eed63" exitCode=0 Feb 26 15:46:05 crc kubenswrapper[4809]: I0226 15:46:05.759805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535346-wgg47" event={"ID":"97b07699-ff45-4b3a-a61c-b8eccdaa792a","Type":"ContainerDied","Data":"a49ee7b379f836f32327f1a5a92222d852b513d8f45ab98bbc6505547d8eed63"} Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.197539 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.212236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25rb\" (UniqueName: \"kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb\") pod \"97b07699-ff45-4b3a-a61c-b8eccdaa792a\" (UID: \"97b07699-ff45-4b3a-a61c-b8eccdaa792a\") " Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.221380 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb" (OuterVolumeSpecName: "kube-api-access-d25rb") pod "97b07699-ff45-4b3a-a61c-b8eccdaa792a" (UID: "97b07699-ff45-4b3a-a61c-b8eccdaa792a"). InnerVolumeSpecName "kube-api-access-d25rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.318161 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d25rb\" (UniqueName: \"kubernetes.io/projected/97b07699-ff45-4b3a-a61c-b8eccdaa792a-kube-api-access-d25rb\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.786522 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535346-wgg47" event={"ID":"97b07699-ff45-4b3a-a61c-b8eccdaa792a","Type":"ContainerDied","Data":"9c3564fdc7f72dfc5696e0bec684f35a974859556d51919921b87df3c6c88240"} Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.786594 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3564fdc7f72dfc5696e0bec684f35a974859556d51919921b87df3c6c88240" Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.786622 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535346-wgg47" Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.854393 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535340-ljs2x"] Feb 26 15:46:07 crc kubenswrapper[4809]: I0226 15:46:07.872538 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535340-ljs2x"] Feb 26 15:46:08 crc kubenswrapper[4809]: I0226 15:46:08.276485 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e213e4-4cd6-4bba-bfe8-50fec48b508c" path="/var/lib/kubelet/pods/14e213e4-4cd6-4bba-bfe8-50fec48b508c/volumes" Feb 26 15:46:12 crc kubenswrapper[4809]: I0226 15:46:12.269148 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:46:12 crc kubenswrapper[4809]: E0226 15:46:12.271040 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:46:14 crc kubenswrapper[4809]: I0226 15:46:14.873869 4809 generic.go:334] "Generic (PLEG): container finished" podID="f6ce2764-18f0-4a15-80f9-be57eb532afc" containerID="033170ce8ce944ef7c17cd0ec60ab7afbd6bf80f4be1ec964b3432b86f078288" exitCode=0 Feb 26 15:46:14 crc kubenswrapper[4809]: I0226 15:46:14.873970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" event={"ID":"f6ce2764-18f0-4a15-80f9-be57eb532afc","Type":"ContainerDied","Data":"033170ce8ce944ef7c17cd0ec60ab7afbd6bf80f4be1ec964b3432b86f078288"} Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.010643 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.053059 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-vr2pp"] Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.066404 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-vr2pp"] Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.096829 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host\") pod \"f6ce2764-18f0-4a15-80f9-be57eb532afc\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.096911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjwf2\" (UniqueName: \"kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2\") pod \"f6ce2764-18f0-4a15-80f9-be57eb532afc\" (UID: \"f6ce2764-18f0-4a15-80f9-be57eb532afc\") " Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.096959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host" (OuterVolumeSpecName: "host") pod "f6ce2764-18f0-4a15-80f9-be57eb532afc" (UID: "f6ce2764-18f0-4a15-80f9-be57eb532afc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.097942 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ce2764-18f0-4a15-80f9-be57eb532afc-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.102606 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2" (OuterVolumeSpecName: "kube-api-access-jjwf2") pod "f6ce2764-18f0-4a15-80f9-be57eb532afc" (UID: "f6ce2764-18f0-4a15-80f9-be57eb532afc"). InnerVolumeSpecName "kube-api-access-jjwf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.200743 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjwf2\" (UniqueName: \"kubernetes.io/projected/f6ce2764-18f0-4a15-80f9-be57eb532afc-kube-api-access-jjwf2\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.270368 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc" path="/var/lib/kubelet/pods/f6ce2764-18f0-4a15-80f9-be57eb532afc/volumes" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.898504 4809 scope.go:117] "RemoveContainer" containerID="033170ce8ce944ef7c17cd0ec60ab7afbd6bf80f4be1ec964b3432b86f078288" Feb 26 15:46:16 crc kubenswrapper[4809]: I0226 15:46:16.898580 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-vr2pp" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.280078 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-f4zbz"] Feb 26 15:46:17 crc kubenswrapper[4809]: E0226 15:46:17.280871 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc" containerName="container-00" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.280886 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc" containerName="container-00" Feb 26 15:46:17 crc kubenswrapper[4809]: E0226 15:46:17.280916 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b07699-ff45-4b3a-a61c-b8eccdaa792a" containerName="oc" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.280922 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b07699-ff45-4b3a-a61c-b8eccdaa792a" containerName="oc" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.281143 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b07699-ff45-4b3a-a61c-b8eccdaa792a" containerName="oc" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.281177 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ce2764-18f0-4a15-80f9-be57eb532afc" containerName="container-00" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.282055 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.284515 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwnrr"/"default-dockercfg-jxsvd" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.306481 4809 scope.go:117] "RemoveContainer" containerID="939bead0f98b173455e8bdecd73b548251f0d5e767d7c4abe861288dc297f3ea" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.431374 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bk5\" (UniqueName: \"kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.431494 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.534333 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bk5\" (UniqueName: \"kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.534455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 
15:46:17.534621 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz"
Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.558757 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bk5\" (UniqueName: \"kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5\") pod \"crc-debug-f4zbz\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " pod="openshift-must-gather-vwnrr/crc-debug-f4zbz"
Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.600732 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz"
Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.911449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" event={"ID":"4cbbf622-687f-4384-90d1-04aca54b3b09","Type":"ContainerStarted","Data":"b70bc6943df8a363f48424413d5a642ecb031fd79c02e3d9a5cd5fae24aae7e2"}
Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.911770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" event={"ID":"4cbbf622-687f-4384-90d1-04aca54b3b09","Type":"ContainerStarted","Data":"84081233ebe651c8abfa013a4c1f2a4e926f43b4fef900fef9c7f43aba7978b3"}
Feb 26 15:46:17 crc kubenswrapper[4809]: I0226 15:46:17.939786 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" podStartSLOduration=0.939762366 podStartE2EDuration="939.762366ms" podCreationTimestamp="2026-02-26 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 15:46:17.924330248 +0000 UTC m=+5556.397650781" watchObservedRunningTime="2026-02-26 15:46:17.939762366 +0000 UTC m=+5556.413082899"
Feb 26 15:46:18 crc kubenswrapper[4809]: I0226 15:46:18.922600 4809 generic.go:334] "Generic (PLEG): container finished" podID="4cbbf622-687f-4384-90d1-04aca54b3b09" containerID="b70bc6943df8a363f48424413d5a642ecb031fd79c02e3d9a5cd5fae24aae7e2" exitCode=0
Feb 26 15:46:18 crc kubenswrapper[4809]: I0226 15:46:18.922916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" event={"ID":"4cbbf622-687f-4384-90d1-04aca54b3b09","Type":"ContainerDied","Data":"b70bc6943df8a363f48424413d5a642ecb031fd79c02e3d9a5cd5fae24aae7e2"}
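
The crc-debug-f4zbz entries above trace a complete short-lived pod: sandbox creation, two ContainerStarted events (the sandbox ID and the container ID), then ContainerDied with exitCode=0 about a second later. A minimal sketch that reconstructs such per-pod timelines from the "SyncLoop (PLEG)" entries; the regex and JSON field names (ID, Type, Data) are taken from the entries in this capture, so treat it as a starting point rather than a general parser:

# Rebuild per-pod container event timelines from "SyncLoop (PLEG)" journal
# lines read on stdin. The event payload is plain JSON, as seen above.
import json
import re
import sys
from collections import defaultdict

PLEG = re.compile(
    r'(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) .*?"SyncLoop \(PLEG\): event for pod" '
    r'pod="(?P<pod>[^"]+)" event=(?P<event>\{.*?\})'
)

def main() -> None:
    timeline = defaultdict(list)
    for line in sys.stdin:
        m = PLEG.search(line)
        if m:
            ev = json.loads(m["event"])
            timeline[m["pod"]].append((m["ts"], ev["Type"], ev["Data"][:12]))
    for pod, events in sorted(timeline.items()):
        print(pod)
        for ts, kind, cid in events:
            print(f"  {ts}  {kind:18s} {cid}")

if __name__ == "__main__":
    main()
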
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.107313 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-f4zbz"] Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.119559 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-f4zbz"] Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.212408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host\") pod \"4cbbf622-687f-4384-90d1-04aca54b3b09\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.212714 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host" (OuterVolumeSpecName: "host") pod "4cbbf622-687f-4384-90d1-04aca54b3b09" (UID: "4cbbf622-687f-4384-90d1-04aca54b3b09"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.212777 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94bk5\" (UniqueName: \"kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5\") pod \"4cbbf622-687f-4384-90d1-04aca54b3b09\" (UID: \"4cbbf622-687f-4384-90d1-04aca54b3b09\") " Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.213612 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cbbf622-687f-4384-90d1-04aca54b3b09-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.220725 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5" (OuterVolumeSpecName: "kube-api-access-94bk5") pod "4cbbf622-687f-4384-90d1-04aca54b3b09" (UID: "4cbbf622-687f-4384-90d1-04aca54b3b09"). InnerVolumeSpecName "kube-api-access-94bk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.273557 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cbbf622-687f-4384-90d1-04aca54b3b09" path="/var/lib/kubelet/pods/4cbbf622-687f-4384-90d1-04aca54b3b09/volumes" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.316144 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94bk5\" (UniqueName: \"kubernetes.io/projected/4cbbf622-687f-4384-90d1-04aca54b3b09-kube-api-access-94bk5\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.945399 4809 scope.go:117] "RemoveContainer" containerID="b70bc6943df8a363f48424413d5a642ecb031fd79c02e3d9a5cd5fae24aae7e2" Feb 26 15:46:20 crc kubenswrapper[4809]: I0226 15:46:20.945467 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-f4zbz" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.384794 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-hbxgv"] Feb 26 15:46:21 crc kubenswrapper[4809]: E0226 15:46:21.385713 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbbf622-687f-4384-90d1-04aca54b3b09" containerName="container-00" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.385731 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbbf622-687f-4384-90d1-04aca54b3b09" containerName="container-00" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.386083 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbbf622-687f-4384-90d1-04aca54b3b09" containerName="container-00" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.387117 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.390908 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwnrr"/"default-dockercfg-jxsvd" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.549481 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mbc\" (UniqueName: \"kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.549614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.652288 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89mbc\" (UniqueName: \"kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.652429 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.652612 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.679823 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89mbc\" (UniqueName: \"kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc\") pod \"crc-debug-hbxgv\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 
15:46:21.709425 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:21 crc kubenswrapper[4809]: I0226 15:46:21.974163 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" event={"ID":"1d8be63f-f181-4898-ad99-a61bc1865f59","Type":"ContainerStarted","Data":"434b74548d45c02597a85a71d603c102974a9de2414e21e5d6141fa5ea226d46"} Feb 26 15:46:23 crc kubenswrapper[4809]: I0226 15:46:23.019778 4809 generic.go:334] "Generic (PLEG): container finished" podID="1d8be63f-f181-4898-ad99-a61bc1865f59" containerID="6fff1bc85303e9c50eb49e22f92f550b2f29c1bdfc203ae3f024528d0e20d3af" exitCode=0 Feb 26 15:46:23 crc kubenswrapper[4809]: I0226 15:46:23.019863 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" event={"ID":"1d8be63f-f181-4898-ad99-a61bc1865f59","Type":"ContainerDied","Data":"6fff1bc85303e9c50eb49e22f92f550b2f29c1bdfc203ae3f024528d0e20d3af"} Feb 26 15:46:23 crc kubenswrapper[4809]: I0226 15:46:23.069352 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-hbxgv"] Feb 26 15:46:23 crc kubenswrapper[4809]: I0226 15:46:23.085599 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwnrr/crc-debug-hbxgv"] Feb 26 15:46:23 crc kubenswrapper[4809]: I0226 15:46:23.257951 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:46:23 crc kubenswrapper[4809]: E0226 15:46:23.258342 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.169043 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.317594 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host\") pod \"1d8be63f-f181-4898-ad99-a61bc1865f59\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.317665 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host" (OuterVolumeSpecName: "host") pod "1d8be63f-f181-4898-ad99-a61bc1865f59" (UID: "1d8be63f-f181-4898-ad99-a61bc1865f59"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.317869 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89mbc\" (UniqueName: \"kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc\") pod \"1d8be63f-f181-4898-ad99-a61bc1865f59\" (UID: \"1d8be63f-f181-4898-ad99-a61bc1865f59\") " Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.318527 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8be63f-f181-4898-ad99-a61bc1865f59-host\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.323445 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc" (OuterVolumeSpecName: "kube-api-access-89mbc") pod "1d8be63f-f181-4898-ad99-a61bc1865f59" (UID: "1d8be63f-f181-4898-ad99-a61bc1865f59"). InnerVolumeSpecName "kube-api-access-89mbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:46:24 crc kubenswrapper[4809]: I0226 15:46:24.421096 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89mbc\" (UniqueName: \"kubernetes.io/projected/1d8be63f-f181-4898-ad99-a61bc1865f59-kube-api-access-89mbc\") on node \"crc\" DevicePath \"\"" Feb 26 15:46:25 crc kubenswrapper[4809]: I0226 15:46:25.045183 4809 scope.go:117] "RemoveContainer" containerID="6fff1bc85303e9c50eb49e22f92f550b2f29c1bdfc203ae3f024528d0e20d3af" Feb 26 15:46:25 crc kubenswrapper[4809]: I0226 15:46:25.045269 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwnrr/crc-debug-hbxgv" Feb 26 15:46:26 crc kubenswrapper[4809]: I0226 15:46:26.270287 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8be63f-f181-4898-ad99-a61bc1865f59" path="/var/lib/kubelet/pods/1d8be63f-f181-4898-ad99-a61bc1865f59/volumes" Feb 26 15:46:36 crc kubenswrapper[4809]: I0226 15:46:36.260259 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:46:36 crc kubenswrapper[4809]: E0226 15:46:36.261163 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:46:47 crc kubenswrapper[4809]: I0226 15:46:47.256709 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:46:47 crc kubenswrapper[4809]: E0226 15:46:47.257793 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.454909 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_b1c32e51-c938-4ba2-937a-b57e26cfd0a1/aodh-api/0.log" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.689532 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b1c32e51-c938-4ba2-937a-b57e26cfd0a1/aodh-evaluator/0.log" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.705337 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b1c32e51-c938-4ba2-937a-b57e26cfd0a1/aodh-listener/0.log" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.751300 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b1c32e51-c938-4ba2-937a-b57e26cfd0a1/aodh-notifier/0.log" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.904905 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-694b9cc8b4-9gcrr_861702ed-9e3e-4321-bd9e-3059edb13cc3/barbican-api-log/0.log" Feb 26 15:46:55 crc kubenswrapper[4809]: I0226 15:46:55.924752 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-694b9cc8b4-9gcrr_861702ed-9e3e-4321-bd9e-3059edb13cc3/barbican-api/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.078169 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ddb7b7cf6-jq45v_b61dd9b3-075a-46bd-842c-184e5f02d804/barbican-keystone-listener/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.273860 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ddb7b7cf6-jq45v_b61dd9b3-075a-46bd-842c-184e5f02d804/barbican-keystone-listener-log/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.362974 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fcbbfdc9-5v7dg_e87bc3c2-7478-45b4-bd69-5384f71376bd/barbican-worker-log/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.366962 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fcbbfdc9-5v7dg_e87bc3c2-7478-45b4-bd69-5384f71376bd/barbican-worker/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.552994 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jf2mj_ec2d7dc7-59ac-4b40-9a53-6f1a26eceb47/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.705163 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a80c453e-f839-4b12-acd5-c0e59ba4b2cc/ceilometer-central-agent/1.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.825944 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a80c453e-f839-4b12-acd5-c0e59ba4b2cc/ceilometer-notification-agent/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.907970 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a80c453e-f839-4b12-acd5-c0e59ba4b2cc/proxy-httpd/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.919224 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a80c453e-f839-4b12-acd5-c0e59ba4b2cc/ceilometer-central-agent/0.log" Feb 26 15:46:56 crc kubenswrapper[4809]: I0226 15:46:56.972315 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a80c453e-f839-4b12-acd5-c0e59ba4b2cc/sg-core/0.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.209513 4809 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc/cinder-api-log/0.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.253212 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bebfca04-cc2d-44f7-9c0c-bcf0453f8ebc/cinder-api/0.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.700291 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74917c3f-f22d-43b0-9fbf-6473cb9c6c9d/cinder-scheduler/1.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.817917 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74917c3f-f22d-43b0-9fbf-6473cb9c6c9d/cinder-scheduler/0.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.820694 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74917c3f-f22d-43b0-9fbf-6473cb9c6c9d/probe/0.log" Feb 26 15:46:57 crc kubenswrapper[4809]: I0226 15:46:57.934070 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ssmh9_351b60bc-8ad8-4ac3-89bd-27877aeb981e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.177184 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rqvr8_a2154870-7448-40fd-b259-7a0a77cda1ef/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.230979 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-mp2sl_889cb62e-7001-42d1-9e5f-afe69fb0fea0/init/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.448405 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-mp2sl_889cb62e-7001-42d1-9e5f-afe69fb0fea0/init/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.469979 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-mp2sl_889cb62e-7001-42d1-9e5f-afe69fb0fea0/dnsmasq-dns/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.486587 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2mrxm_f33dc9c7-e973-434a-96c2-6712074b3ef8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.777589 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0a901e0d-8105-4ba3-a31f-71ec7e54983f/glance-log/0.log" Feb 26 15:46:58 crc kubenswrapper[4809]: I0226 15:46:58.787866 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0a901e0d-8105-4ba3-a31f-71ec7e54983f/glance-httpd/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.144849 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83/glance-log/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.161969 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2b5c6e5-1510-4f2f-9140-dcc3fc98bf83/glance-httpd/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.489849 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-685c45777-gq64z_e4f61962-0554-496c-9a5f-da2ed271ddd8/heat-api/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.862934 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4b5x2_79e3b79c-2611-4e20-b330-c37740777890/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.960911 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-79989866bd-79zhg_078913dd-883e-474c-bf17-8a5b75aaf507/heat-engine/0.log" Feb 26 15:46:59 crc kubenswrapper[4809]: I0226 15:46:59.976323 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6f7b855898-fb2p7_87e5ac37-643a-4fb7-8dab-d40645ac9dca/heat-cfnapi/0.log" Feb 26 15:47:00 crc kubenswrapper[4809]: I0226 15:47:00.159104 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qsmkm_b34408b8-4589-48e2-b94c-58a98817be4c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:00 crc kubenswrapper[4809]: I0226 15:47:00.455375 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535301-62cv2_7d19d3f1-87e6-4318-be1c-2065f711f4da/keystone-cron/0.log" Feb 26 15:47:00 crc kubenswrapper[4809]: I0226 15:47:00.587269 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0c763bd9-0040-4c8b-996b-e837d320ab67/kube-state-metrics/0.log" Feb 26 15:47:00 crc kubenswrapper[4809]: I0226 15:47:00.845180 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8vngf_38a4f820-36f9-46c4-b55e-bee9f76ddc4b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:00 crc kubenswrapper[4809]: I0226 15:47:00.921371 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-8w8l4_591f9782-8d8e-4f26-9675-a3d7b7b66493/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:01 crc kubenswrapper[4809]: I0226 15:47:01.447924 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b67cbb9bb-wjj2d_44ac06c8-98f3-478a-bfca-6eca9c2fc66b/keystone-api/0.log" Feb 26 15:47:01 crc kubenswrapper[4809]: I0226 15:47:01.457048 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_c3376572-be7f-494e-a652-045bf9fc9f06/mysqld-exporter/0.log" Feb 26 15:47:02 crc kubenswrapper[4809]: I0226 15:47:02.143887 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6455866c87-pbhmh_20736840-781c-4149-9398-481eb42d293b/neutron-httpd/0.log" Feb 26 15:47:02 crc kubenswrapper[4809]: I0226 15:47:02.235633 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6455866c87-pbhmh_20736840-781c-4149-9398-481eb42d293b/neutron-api/0.log" Feb 26 15:47:02 crc kubenswrapper[4809]: I0226 15:47:02.267970 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:47:02 crc kubenswrapper[4809]: E0226 15:47:02.268366 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:47:02 crc kubenswrapper[4809]: I0226 15:47:02.288470 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7fnnm_26907d88-fa6b-43f0-b59a-d8ce3a779fd4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:03 crc kubenswrapper[4809]: I0226 15:47:03.346641 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_535880a7-82d0-47f7-94c1-8c9662d3b32b/nova-cell0-conductor-conductor/0.log" Feb 26 15:47:03 crc kubenswrapper[4809]: I0226 15:47:03.626005 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a8e2401-9bad-4dce-80b6-b76f9b1f07b1/nova-api-log/0.log" Feb 26 15:47:03 crc kubenswrapper[4809]: I0226 15:47:03.741708 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5bb3781f-0618-426b-a950-2edc6c6e9317/nova-cell1-conductor-conductor/0.log" Feb 26 15:47:03 crc kubenswrapper[4809]: I0226 15:47:03.994077 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a8e2401-9bad-4dce-80b6-b76f9b1f07b1/nova-api-api/0.log" Feb 26 15:47:04 crc kubenswrapper[4809]: I0226 15:47:04.433865 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x2sls_a34f2251-97ea-4dc9-a640-1b3e489d7957/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:04 crc kubenswrapper[4809]: I0226 15:47:04.449647 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d7e7790d-ec12-4f49-acf4-cf7c9b8680c2/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 15:47:04 crc kubenswrapper[4809]: I0226 15:47:04.889072 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4e66982-31ee-45ee-9e2f-60fb4d8e24fe/nova-metadata-log/0.log" Feb 26 15:47:05 crc kubenswrapper[4809]: I0226 15:47:05.049789 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3a1d46ba-37f0-43d7-94a0-bea208549a22/nova-scheduler-scheduler/0.log" Feb 26 15:47:05 crc kubenswrapper[4809]: I0226 15:47:05.217676 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4f21dca-3b2f-4818-8356-1de8cfbbc261/mysql-bootstrap/0.log" Feb 26 15:47:05 crc kubenswrapper[4809]: I0226 15:47:05.708612 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4f21dca-3b2f-4818-8356-1de8cfbbc261/mysql-bootstrap/0.log" Feb 26 15:47:05 crc kubenswrapper[4809]: I0226 15:47:05.715913 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4f21dca-3b2f-4818-8356-1de8cfbbc261/galera/1.log" Feb 26 15:47:05 crc kubenswrapper[4809]: I0226 15:47:05.721653 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4f21dca-3b2f-4818-8356-1de8cfbbc261/galera/0.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.041380 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b25b5c98-b424-41ce-b099-876b266cf2be/mysql-bootstrap/0.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.319200 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b25b5c98-b424-41ce-b099-876b266cf2be/galera/0.log" Feb 26 15:47:06 crc 
kubenswrapper[4809]: I0226 15:47:06.331872 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b25b5c98-b424-41ce-b099-876b266cf2be/mysql-bootstrap/0.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.374967 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b25b5c98-b424-41ce-b099-876b266cf2be/galera/1.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.583066 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a79dbedd-3475-4279-9c37-9add895fd0e1/openstackclient/0.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.938450 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4e66982-31ee-45ee-9e2f-60fb4d8e24fe/nova-metadata-metadata/0.log" Feb 26 15:47:06 crc kubenswrapper[4809]: I0226 15:47:06.954645 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mctbl_3807a00a-1120-4344-9a7b-6522b0f3099b/ovn-controller/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.098161 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-68qnw_ce5afc58-7519-4c58-97e2-467468246721/openstack-network-exporter/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.215756 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bld69_86ac277e-d27e-4d56-b145-244a494765fb/ovsdb-server-init/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.492199 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bld69_86ac277e-d27e-4d56-b145-244a494765fb/ovs-vswitchd/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.511939 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bld69_86ac277e-d27e-4d56-b145-244a494765fb/ovsdb-server/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.561983 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bld69_86ac277e-d27e-4d56-b145-244a494765fb/ovsdb-server-init/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.732379 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ltx82_b3447f7c-8de1-42d8-8f51-9d78062f6dd3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.775413 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e080a660-5ea2-479a-981c-d82d1b547d04/openstack-network-exporter/0.log" Feb 26 15:47:07 crc kubenswrapper[4809]: I0226 15:47:07.993699 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e080a660-5ea2-479a-981c-d82d1b547d04/ovn-northd/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.050546 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c3e7cf46-b165-4cb9-9249-286b9ef0a2c4/openstack-network-exporter/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.082631 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c3e7cf46-b165-4cb9-9249-286b9ef0a2c4/ovsdbserver-nb/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.229938 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a357022d-35fa-453a-82df-d4726ce47a6a/openstack-network-exporter/0.log" Feb 26 15:47:08 crc 
kubenswrapper[4809]: I0226 15:47:08.276935 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a357022d-35fa-453a-82df-d4726ce47a6a/ovsdbserver-sb/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.641120 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c65dd4586-db9jl_79c81db6-7044-4d85-9680-bb4744af4cba/placement-api/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.644911 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c65dd4586-db9jl_79c81db6-7044-4d85-9680-bb4744af4cba/placement-log/0.log" Feb 26 15:47:08 crc kubenswrapper[4809]: I0226 15:47:08.778905 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5b487ff7-ff62-4570-a75c-314514fb7496/init-config-reloader/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.036590 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5b487ff7-ff62-4570-a75c-314514fb7496/config-reloader/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.047748 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5b487ff7-ff62-4570-a75c-314514fb7496/thanos-sidecar/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.051702 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5b487ff7-ff62-4570-a75c-314514fb7496/init-config-reloader/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.057704 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5b487ff7-ff62-4570-a75c-314514fb7496/prometheus/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.301888 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7fadb9f7-5f45-40bb-a288-8332be9f3c10/setup-container/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.517720 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7fadb9f7-5f45-40bb-a288-8332be9f3c10/setup-container/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.580370 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7fadb9f7-5f45-40bb-a288-8332be9f3c10/rabbitmq/0.log" Feb 26 15:47:09 crc kubenswrapper[4809]: I0226 15:47:09.670311 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_20d71c1c-fd94-4fd4-b4b7-fd776b33e715/setup-container/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.160593 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_20d71c1c-fd94-4fd4-b4b7-fd776b33e715/setup-container/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.200908 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_20d71c1c-fd94-4fd4-b4b7-fd776b33e715/rabbitmq/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.320540 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_44f15062-69d5-4f5c-a51c-3c0f75700b52/setup-container/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.583164 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_44f15062-69d5-4f5c-a51c-3c0f75700b52/rabbitmq/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.613156 4809 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_44f15062-69d5-4f5c-a51c-3c0f75700b52/setup-container/0.log" Feb 26 15:47:10 crc kubenswrapper[4809]: I0226 15:47:10.625850 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f1b541d8-7c08-42e8-831b-6e3d7262277a/setup-container/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.016475 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f1b541d8-7c08-42e8-831b-6e3d7262277a/setup-container/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.057644 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h9g8z_ecbd5645-f7e4-4741-9042-5d1db68de941/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.118713 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f1b541d8-7c08-42e8-831b-6e3d7262277a/rabbitmq/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.336485 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-n6wjz_83288dad-14b0-4e58-b07f-4006eddbbfe6/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.476718 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wxdqm_b26ec76a-b3e0-4564-a225-0f7fe176f3e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:11 crc kubenswrapper[4809]: I0226 15:47:11.638207 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b2h84_c491f7d7-7607-4605-b5fb-312493e0bebf/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:12 crc kubenswrapper[4809]: I0226 15:47:12.411742 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mnhs8_9f8e2003-a428-496b-b735-9d4e242712a9/ssh-known-hosts-edpm-deployment/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.104499 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-f84fv_9ec870c5-5d62-422e-bbd4-d130b152e60a/swift-ring-rebalance/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.141606 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cf69889d9-nqp5q_dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3/proxy-server/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.185243 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cf69889d9-nqp5q_dd4d3fe9-350f-4fc3-8cd2-6ea95162a0d3/proxy-httpd/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.457295 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/account-replicator/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.459814 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/account-auditor/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.482556 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/account-reaper/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.689746 4809 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/account-server/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.704539 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/container-auditor/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.802037 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/container-replicator/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.835731 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/container-server/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.952046 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/container-updater/0.log" Feb 26 15:47:13 crc kubenswrapper[4809]: I0226 15:47:13.983598 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/object-auditor/0.log" Feb 26 15:47:14 crc kubenswrapper[4809]: I0226 15:47:14.657511 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/object-server/0.log" Feb 26 15:47:14 crc kubenswrapper[4809]: I0226 15:47:14.661269 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/object-expirer/0.log" Feb 26 15:47:14 crc kubenswrapper[4809]: I0226 15:47:14.671926 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/object-replicator/0.log" Feb 26 15:47:14 crc kubenswrapper[4809]: I0226 15:47:14.729207 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/object-updater/0.log" Feb 26 15:47:14 crc kubenswrapper[4809]: I0226 15:47:14.992410 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/swift-recon-cron/0.log" Feb 26 15:47:15 crc kubenswrapper[4809]: I0226 15:47:15.040432 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_48507eec-5e23-465d-bf31-73a90acd8e73/rsync/0.log" Feb 26 15:47:15 crc kubenswrapper[4809]: I0226 15:47:15.095336 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-brb47_0a1d6e3c-8131-4221-bbfa-b50c54318c94/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:15 crc kubenswrapper[4809]: I0226 15:47:15.560198 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-9ghcf_29b6dce3-2861-435e-982c-63bdc94b4dca/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:15 crc kubenswrapper[4809]: I0226 15:47:15.807157 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b12561a8-3ff6-42aa-81ae-a7a8b304c6c0/test-operator-logs-container/0.log" Feb 26 15:47:15 crc kubenswrapper[4809]: I0226 15:47:15.857539 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5dc4d14b-07db-462f-9fb4-8a00eb3452be/tempest-tests-tempest-tests-runner/0.log" Feb 26 
15:47:16 crc kubenswrapper[4809]: I0226 15:47:16.022461 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bz9gl_9fb723f4-b0eb-4520-a602-c723a935d0c6/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 15:47:17 crc kubenswrapper[4809]: I0226 15:47:17.256736 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:47:17 crc kubenswrapper[4809]: E0226 15:47:17.257368 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:47:22 crc kubenswrapper[4809]: I0226 15:47:22.079897 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3ceedead-6111-4ca8-b2ef-c97e503513eb/memcached/0.log" Feb 26 15:47:31 crc kubenswrapper[4809]: I0226 15:47:31.256686 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:47:31 crc kubenswrapper[4809]: E0226 15:47:31.257531 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:47:42 crc kubenswrapper[4809]: I0226 15:47:42.266531 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:47:42 crc kubenswrapper[4809]: E0226 15:47:42.267374 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:47:49 crc kubenswrapper[4809]: I0226 15:47:49.631441 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/util/0.log" Feb 26 15:47:49 crc kubenswrapper[4809]: I0226 15:47:49.951104 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/pull/0.log" Feb 26 15:47:49 crc kubenswrapper[4809]: I0226 15:47:49.969616 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/util/0.log" Feb 26 15:47:49 crc kubenswrapper[4809]: I0226 15:47:49.970660 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/pull/0.log" Feb 26 
15:47:50 crc kubenswrapper[4809]: I0226 15:47:50.245780 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/pull/0.log" Feb 26 15:47:50 crc kubenswrapper[4809]: I0226 15:47:50.292058 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/util/0.log" Feb 26 15:47:50 crc kubenswrapper[4809]: I0226 15:47:50.305326 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_74090c24a9067af8d913a34cef30b557bc94234c52e2df6a051e0eeaf2k9xc7_c84e8dc8-cb82-4203-9e89-56e191b7e072/extract/0.log" Feb 26 15:47:50 crc kubenswrapper[4809]: I0226 15:47:50.845269 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-c9sm7_369ebb20-08ea-4aa4-ba33-8eecc4a208ca/manager/0.log" Feb 26 15:47:51 crc kubenswrapper[4809]: I0226 15:47:51.252459 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-r946b_452db9cf-1689-42fa-bd48-15be5d5012e4/manager/0.log" Feb 26 15:47:52 crc kubenswrapper[4809]: I0226 15:47:52.010571 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-htlkr_a94df460-1916-4302-a528-1850277c2c68/manager/0.log" Feb 26 15:47:52 crc kubenswrapper[4809]: I0226 15:47:52.271841 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-psj8j_b891860e-25ba-48f0-90f1-a9f481e661eb/manager/0.log" Feb 26 15:47:52 crc kubenswrapper[4809]: I0226 15:47:52.595749 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mvll2_2130e114-53fd-4853-bd3a-df26c1c3df4a/manager/1.log" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.146371 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-tlnl6_ed2539dd-3109-42bf-9c5b-aee680db3b4f/manager/0.log" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.257043 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:47:53 crc kubenswrapper[4809]: E0226 15:47:53.257408 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.373835 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mvll2_2130e114-53fd-4853-bd3a-df26c1c3df4a/manager/0.log" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.767314 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-wmn76_51190e04-2cb1-41e9-9d62-23ef12d0edd3/manager/0.log" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.780684 4809 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-vb9br_7a66a093-3f9f-49a8-a45b-84aef0465d4e/manager/0.log" Feb 26 15:47:53 crc kubenswrapper[4809]: I0226 15:47:53.804349 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-qnxhv_e0ea19c8-d5ec-445b-bf27-e8fe1c6397ae/manager/0.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.088259 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-wbnwh_f88f4170-586f-4203-8c9b-12aa0865a6be/manager/0.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.152928 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-llxf9_d05f3883-4b90-4b5d-94b2-b7e916a66ed6/manager/0.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.271110 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jsjcz_00bdb1ef-c56b-4abe-b491-9c24a8f9089d/manager/1.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.425402 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jsjcz_00bdb1ef-c56b-4abe-b491-9c24a8f9089d/manager/0.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.476911 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-xcrth_db50276a-5e85-4edb-9538-0b42201fbe74/manager/1.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.517700 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-xcrth_db50276a-5e85-4edb-9538-0b42201fbe74/manager/0.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.652849 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6_ed3d7dc0-026c-4ed5-b816-b0249300c743/manager/1.log" Feb 26 15:47:54 crc kubenswrapper[4809]: I0226 15:47:54.786650 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cp57l6_ed3d7dc0-026c-4ed5-b816-b0249300c743/manager/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.170345 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bmlld_f06c5375-eeef-461b-9dce-048a10de5770/registry-server/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.249318 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fd648b64f-xrqvp_bb2dfdc8-ecf9-4fdc-ad4a-5e239b49d00c/operator/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.480440 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-b24kw_957002f1-5ca4-484b-b664-b7b563257915/manager/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.572576 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-25wlm_6f049af1-526c-496e-a9af-4066b69ed359/manager/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.744234 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7pkzl_1aebc8ba-eb1d-49a1-843b-3634bbbd4556/operator/1.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.791795 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7pkzl_1aebc8ba-eb1d-49a1-843b-3634bbbd4556/operator/0.log" Feb 26 15:47:55 crc kubenswrapper[4809]: I0226 15:47:55.980259 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-f6n9h_15986ded-5e26-4bcc-bf72-ee349431961a/manager/0.log" Feb 26 15:47:56 crc kubenswrapper[4809]: I0226 15:47:56.310371 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-w2zff_2c068d1c-3f6c-49a3-bf65-d29b68c5ad11/manager/1.log" Feb 26 15:47:56 crc kubenswrapper[4809]: I0226 15:47:56.462790 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-w2zff_2c068d1c-3f6c-49a3-bf65-d29b68c5ad11/manager/0.log" Feb 26 15:47:56 crc kubenswrapper[4809]: I0226 15:47:56.710916 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-dhrj6_5ba0d806-2bcd-45f1-b529-36ed243d775b/manager/0.log" Feb 26 15:47:56 crc kubenswrapper[4809]: I0226 15:47:56.852888 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-57dc789b66-zjvhb_2b26231b-2e6e-4484-8014-6dcf40d06f40/manager/0.log" Feb 26 15:47:57 crc kubenswrapper[4809]: I0226 15:47:57.571112 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fc9897686-rt5g8_3e30fc60-012b-4a56-9cf0-56ff13e835d4/manager/0.log" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.168747 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535348-9qgm6"] Feb 26 15:48:00 crc kubenswrapper[4809]: E0226 15:48:00.169825 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8be63f-f181-4898-ad99-a61bc1865f59" containerName="container-00" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.169841 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8be63f-f181-4898-ad99-a61bc1865f59" containerName="container-00" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.170086 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8be63f-f181-4898-ad99-a61bc1865f59" containerName="container-00" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.170985 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.174238 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.174512 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.179395 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.200983 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535348-9qgm6"] Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.213093 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvcw\" (UniqueName: \"kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw\") pod \"auto-csr-approver-29535348-9qgm6\" (UID: \"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e\") " pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.315322 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvcw\" (UniqueName: \"kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw\") pod \"auto-csr-approver-29535348-9qgm6\" (UID: \"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e\") " pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.342747 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvcw\" (UniqueName: \"kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw\") pod \"auto-csr-approver-29535348-9qgm6\" (UID: \"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e\") " pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:00 crc kubenswrapper[4809]: I0226 15:48:00.497725 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:01 crc kubenswrapper[4809]: I0226 15:48:01.139876 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535348-9qgm6"] Feb 26 15:48:01 crc kubenswrapper[4809]: I0226 15:48:01.262052 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" event={"ID":"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e","Type":"ContainerStarted","Data":"3323430879de7f263a2a79de61eeeca68e30c5e3d1ac1abd57ae5bd6cab712ed"} Feb 26 15:48:02 crc kubenswrapper[4809]: I0226 15:48:02.738695 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:48:02 crc kubenswrapper[4809]: I0226 15:48:02.743667 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a4f21dca-3b2f-4818-8356-1de8cfbbc261" containerName="galera" probeResult="failure" output="command timed out" Feb 26 15:48:04 crc kubenswrapper[4809]: I0226 15:48:04.215804 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-7ptff_0dc90358-78e1-4391-9b04-72fb1a0ffb6e/manager/0.log" Feb 26 15:48:04 crc kubenswrapper[4809]: I0226 15:48:04.310753 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" event={"ID":"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e","Type":"ContainerStarted","Data":"2468b380823ce4803aeec1682408d0f30a9f371e494c014e9118ed5c7e830bea"} Feb 26 15:48:04 crc kubenswrapper[4809]: I0226 15:48:04.333246 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" podStartSLOduration=2.389003502 podStartE2EDuration="4.333222583s" podCreationTimestamp="2026-02-26 15:48:00 +0000 UTC" firstStartedPulling="2026-02-26 15:48:01.137353815 +0000 UTC m=+5659.610674338" lastFinishedPulling="2026-02-26 15:48:03.081572896 +0000 UTC m=+5661.554893419" observedRunningTime="2026-02-26 15:48:04.321721266 +0000 UTC m=+5662.795041789" watchObservedRunningTime="2026-02-26 15:48:04.333222583 +0000 UTC m=+5662.806543106" Feb 26 15:48:05 crc kubenswrapper[4809]: I0226 15:48:05.257535 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:48:05 crc kubenswrapper[4809]: E0226 15:48:05.258365 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:48:06 crc kubenswrapper[4809]: I0226 15:48:06.336745 4809 generic.go:334] "Generic (PLEG): container finished" podID="41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" containerID="2468b380823ce4803aeec1682408d0f30a9f371e494c014e9118ed5c7e830bea" exitCode=0 Feb 26 15:48:06 crc kubenswrapper[4809]: I0226 15:48:06.336829 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" 
event={"ID":"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e","Type":"ContainerDied","Data":"2468b380823ce4803aeec1682408d0f30a9f371e494c014e9118ed5c7e830bea"} Feb 26 15:48:07 crc kubenswrapper[4809]: I0226 15:48:07.830465 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:07 crc kubenswrapper[4809]: I0226 15:48:07.928223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nvcw\" (UniqueName: \"kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw\") pod \"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e\" (UID: \"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e\") " Feb 26 15:48:07 crc kubenswrapper[4809]: I0226 15:48:07.935234 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw" (OuterVolumeSpecName: "kube-api-access-4nvcw") pod "41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" (UID: "41d1a6b3-fb07-4247-9ac3-3668cbd08b5e"). InnerVolumeSpecName "kube-api-access-4nvcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.032680 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nvcw\" (UniqueName: \"kubernetes.io/projected/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e-kube-api-access-4nvcw\") on node \"crc\" DevicePath \"\"" Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.357728 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" event={"ID":"41d1a6b3-fb07-4247-9ac3-3668cbd08b5e","Type":"ContainerDied","Data":"3323430879de7f263a2a79de61eeeca68e30c5e3d1ac1abd57ae5bd6cab712ed"} Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.357776 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3323430879de7f263a2a79de61eeeca68e30c5e3d1ac1abd57ae5bd6cab712ed" Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.357777 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535348-9qgm6" Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.423951 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535342-lqss6"] Feb 26 15:48:08 crc kubenswrapper[4809]: I0226 15:48:08.436619 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535342-lqss6"] Feb 26 15:48:10 crc kubenswrapper[4809]: I0226 15:48:10.271168 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01413d5e-56a4-4b5b-9f12-7baad5eb2c02" path="/var/lib/kubelet/pods/01413d5e-56a4-4b5b-9f12-7baad5eb2c02/volumes" Feb 26 15:48:18 crc kubenswrapper[4809]: I0226 15:48:18.258740 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:48:18 crc kubenswrapper[4809]: I0226 15:48:18.543313 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f"} Feb 26 15:48:20 crc kubenswrapper[4809]: I0226 15:48:20.199857 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gpx9n_0cf0a064-9313-441b-9ab2-19a3b64ec281/control-plane-machine-set-operator/0.log" Feb 26 15:48:20 crc kubenswrapper[4809]: I0226 15:48:20.425997 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5c5f4_63ad35a4-2e13-46d4-9404-690ffddd919e/kube-rbac-proxy/0.log" Feb 26 15:48:20 crc kubenswrapper[4809]: I0226 15:48:20.463151 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5c5f4_63ad35a4-2e13-46d4-9404-690ffddd919e/machine-api-operator/0.log" Feb 26 15:48:33 crc kubenswrapper[4809]: I0226 15:48:33.347257 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-w4nlj_379471df-cfa1-4a81-893b-f00d1ef56738/cert-manager-controller/0.log" Feb 26 15:48:33 crc kubenswrapper[4809]: I0226 15:48:33.547203 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lhqrt_8cdb2a93-aaed-4598-b78d-c8ba2a452c77/cert-manager-cainjector/0.log" Feb 26 15:48:33 crc kubenswrapper[4809]: I0226 15:48:33.595249 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bk5r6_bd634336-09f5-4412-a619-3c59838d89c6/cert-manager-webhook/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.064104 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-sz7st_0b68ecb6-4527-43b3-9383-605a44c377a4/nmstate-console-plugin/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.214088 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jz4x7_4ce72366-e1aa-4a1a-ae00-1ff3e592c4df/nmstate-handler/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.288270 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-8gnsv_665daf18-37e2-42cb-9d28-671eed0de9ae/kube-rbac-proxy/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.290392 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-8gnsv_665daf18-37e2-42cb-9d28-671eed0de9ae/nmstate-metrics/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.434968 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-r7rl4_77f7460a-7462-42ea-8dd6-32340fc3c453/nmstate-operator/0.log" Feb 26 15:48:47 crc kubenswrapper[4809]: I0226 15:48:47.487468 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-5m958_ca66b8d2-b108-4e52-a6f0-0f05d6fd4f82/nmstate-webhook/0.log" Feb 26 15:49:01 crc kubenswrapper[4809]: I0226 15:49:01.470400 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/kube-rbac-proxy/0.log" Feb 26 15:49:01 crc kubenswrapper[4809]: I0226 15:49:01.504795 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/manager/1.log" Feb 26 15:49:01 crc kubenswrapper[4809]: I0226 15:49:01.722284 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/manager/0.log" Feb 26 15:49:17 crc kubenswrapper[4809]: I0226 15:49:17.173556 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h5gk4_87348b90-199e-442d-a9ec-263588a8cc54/prometheus-operator/0.log" Feb 26 15:49:17 crc kubenswrapper[4809]: I0226 15:49:17.410846 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_b56a5ce7-761a-410a-84e8-41e01ad2b55e/prometheus-operator-admission-webhook/0.log" Feb 26 15:49:17 crc kubenswrapper[4809]: I0226 15:49:17.412310 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_906a26fc-9fb3-4964-8c39-ef42e4915be5/prometheus-operator-admission-webhook/0.log" Feb 26 15:49:17 crc kubenswrapper[4809]: I0226 15:49:17.502709 4809 scope.go:117] "RemoveContainer" containerID="2d1950704410befa5899bbf1b31ae759cca28679bcedc8061b93cfb4baa06572" Feb 26 15:49:18 crc kubenswrapper[4809]: I0226 15:49:18.409973 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qq6nr_cc062236-67aa-4219-8e13-45ff2cf44f8e/operator/0.log" Feb 26 15:49:18 crc kubenswrapper[4809]: I0226 15:49:18.427480 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vxfq4_a1694e2c-b193-496d-b2df-d4c8857e2cc2/observability-ui-dashboards/0.log" Feb 26 15:49:18 crc kubenswrapper[4809]: I0226 15:49:18.593749 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9tgqx_bb918b49-7bc0-40e4-b7a7-a4ab671e7911/perses-operator/0.log" Feb 26 15:49:36 crc kubenswrapper[4809]: I0226 15:49:36.519119 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-rb6br_1926a0b0-d825-4666-af1b-dcf70edde6e5/cluster-logging-operator/0.log" Feb 26 15:49:36 crc kubenswrapper[4809]: I0226 15:49:36.800338 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_collector-bdmnm_7c5a6be3-a564-4d18-a311-854ab5e8804e/collector/0.log" Feb 26 15:49:36 crc kubenswrapper[4809]: I0226 15:49:36.822000 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_19265028-6636-400d-9803-4b7cbcf14758/loki-compactor/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.020345 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-lllg8_6dde47f1-266b-4f13-978b-26ff224139e9/loki-distributor/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.085939 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-568bb59667-ctm8g_b1dab503-8599-4066-85b7-86c389ed7748/gateway/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.318752 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-568bb59667-znjxl_1fc6d9b6-52bd-409c-afa9-693fbe42fb7c/gateway/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.355748 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-568bb59667-ctm8g_b1dab503-8599-4066-85b7-86c389ed7748/opa/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.416767 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-568bb59667-znjxl_1fc6d9b6-52bd-409c-afa9-693fbe42fb7c/opa/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.604004 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7d913002-7509-40a2-9de5-3efb1c774a56/loki-index-gateway/0.log" Feb 26 15:49:37 crc kubenswrapper[4809]: I0226 15:49:37.824896 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_05cda7c6-2dff-46e8-9622-6dda35865e97/loki-ingester/0.log" Feb 26 15:49:38 crc kubenswrapper[4809]: I0226 15:49:38.017606 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-d8m5w_d1f96f50-c096-4107-9fe1-351bb6b20d57/loki-querier/0.log" Feb 26 15:49:38 crc kubenswrapper[4809]: I0226 15:49:38.167746 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-nv5fd_9a7bcc4d-3a79-4727-bf5e-e96d028fa950/loki-query-frontend/0.log" Feb 26 15:49:55 crc kubenswrapper[4809]: I0226 15:49:55.480369 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v7drb_cd58f297-8233-45a5-8bd4-04621d1e1750/kube-rbac-proxy/0.log" Feb 26 15:49:55 crc kubenswrapper[4809]: I0226 15:49:55.750479 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v7drb_cd58f297-8233-45a5-8bd4-04621d1e1750/controller/0.log" Feb 26 15:49:55 crc kubenswrapper[4809]: I0226 15:49:55.761408 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-pb8s6_629e9f19-72e1-497b-a156-51a0ed359d4c/frr-k8s-webhook-server/0.log" Feb 26 15:49:55 crc kubenswrapper[4809]: I0226 15:49:55.974046 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-frr-files/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.137143 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-reloader/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.154761 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-frr-files/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.174788 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-metrics/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.283751 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-reloader/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.394628 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-frr-files/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.439408 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-reloader/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.449431 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-metrics/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.541749 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-metrics/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.730982 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-frr-files/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.764417 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-metrics/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.768946 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/cp-reloader/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.841727 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/controller/0.log" Feb 26 15:49:56 crc kubenswrapper[4809]: I0226 15:49:56.953423 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/frr-metrics/0.log" Feb 26 15:49:57 crc kubenswrapper[4809]: I0226 15:49:57.070729 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/kube-rbac-proxy/0.log" Feb 26 15:49:57 crc kubenswrapper[4809]: I0226 15:49:57.127742 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/kube-rbac-proxy-frr/0.log" Feb 26 15:49:57 crc kubenswrapper[4809]: I0226 15:49:57.227120 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/reloader/0.log" Feb 26 15:49:57 crc kubenswrapper[4809]: I0226 15:49:57.369875 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78c95b4464-fclfm_19bdfc76-4c2f-4ef8-890e-84d3a6f5b895/manager/1.log" Feb 26 15:49:57 
crc kubenswrapper[4809]: I0226 15:49:57.592522 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6fc554dcbc-rqcfb_ba3c9bcd-2859-4815-ba37-d6337eb78ec1/webhook-server/1.log" Feb 26 15:49:57 crc kubenswrapper[4809]: I0226 15:49:57.869152 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78c95b4464-fclfm_19bdfc76-4c2f-4ef8-890e-84d3a6f5b895/manager/0.log" Feb 26 15:49:58 crc kubenswrapper[4809]: I0226 15:49:58.086274 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6fc554dcbc-rqcfb_ba3c9bcd-2859-4815-ba37-d6337eb78ec1/webhook-server/0.log" Feb 26 15:49:58 crc kubenswrapper[4809]: I0226 15:49:58.270219 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kwnwr_e778875f-43d9-4ab5-9e0c-e561a3d4bd2f/kube-rbac-proxy/0.log" Feb 26 15:49:59 crc kubenswrapper[4809]: I0226 15:49:59.332713 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpd62_a0457c9d-5a38-464b-92ca-da334aae1915/frr/0.log" Feb 26 15:49:59 crc kubenswrapper[4809]: I0226 15:49:59.339352 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kwnwr_e778875f-43d9-4ab5-9e0c-e561a3d4bd2f/speaker/0.log" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.154566 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535350-jr4m6"] Feb 26 15:50:00 crc kubenswrapper[4809]: E0226 15:50:00.155153 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" containerName="oc" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.155172 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" containerName="oc" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.155459 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" containerName="oc" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.156366 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.160000 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.160229 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.163444 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.166894 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535350-jr4m6"] Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.326234 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckd5\" (UniqueName: \"kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5\") pod \"auto-csr-approver-29535350-jr4m6\" (UID: \"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf\") " pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.429119 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckd5\" (UniqueName: \"kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5\") pod \"auto-csr-approver-29535350-jr4m6\" (UID: \"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf\") " pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.448875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckd5\" (UniqueName: \"kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5\") pod \"auto-csr-approver-29535350-jr4m6\" (UID: \"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf\") " pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:00 crc kubenswrapper[4809]: I0226 15:50:00.475070 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:01 crc kubenswrapper[4809]: I0226 15:50:01.008659 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535350-jr4m6"] Feb 26 15:50:01 crc kubenswrapper[4809]: I0226 15:50:01.018897 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:50:01 crc kubenswrapper[4809]: I0226 15:50:01.863413 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" event={"ID":"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf","Type":"ContainerStarted","Data":"890aea6b2ef14381b0dc32bfb7a919167e0e273f09a26231616b573c89519569"} Feb 26 15:50:06 crc kubenswrapper[4809]: I0226 15:50:06.917799 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" event={"ID":"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf","Type":"ContainerStarted","Data":"ae486067bd8678fdad26dc4e6bb8a13025e24715b95b155a47d810695c130aa8"} Feb 26 15:50:06 crc kubenswrapper[4809]: I0226 15:50:06.955339 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" podStartSLOduration=2.218171919 podStartE2EDuration="6.955310911s" podCreationTimestamp="2026-02-26 15:50:00 +0000 UTC" firstStartedPulling="2026-02-26 15:50:01.01644441 +0000 UTC m=+5779.489764943" lastFinishedPulling="2026-02-26 15:50:05.753583392 +0000 UTC m=+5784.226903935" observedRunningTime="2026-02-26 15:50:06.933882622 +0000 UTC m=+5785.407203145" watchObservedRunningTime="2026-02-26 15:50:06.955310911 +0000 UTC m=+5785.428631444" Feb 26 15:50:08 crc kubenswrapper[4809]: I0226 15:50:08.942584 4809 generic.go:334] "Generic (PLEG): container finished" podID="6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" containerID="ae486067bd8678fdad26dc4e6bb8a13025e24715b95b155a47d810695c130aa8" exitCode=0 Feb 26 15:50:08 crc kubenswrapper[4809]: I0226 15:50:08.942681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" event={"ID":"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf","Type":"ContainerDied","Data":"ae486067bd8678fdad26dc4e6bb8a13025e24715b95b155a47d810695c130aa8"} Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.381735 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.499314 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckd5\" (UniqueName: \"kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5\") pod \"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf\" (UID: \"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf\") " Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.505999 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5" (OuterVolumeSpecName: "kube-api-access-mckd5") pod "6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" (UID: "6847cdd0-5d36-47b9-b7a4-41d0be68a1cf"). InnerVolumeSpecName "kube-api-access-mckd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.603395 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckd5\" (UniqueName: \"kubernetes.io/projected/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf-kube-api-access-mckd5\") on node \"crc\" DevicePath \"\"" Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.965311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" event={"ID":"6847cdd0-5d36-47b9-b7a4-41d0be68a1cf","Type":"ContainerDied","Data":"890aea6b2ef14381b0dc32bfb7a919167e0e273f09a26231616b573c89519569"} Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.965542 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890aea6b2ef14381b0dc32bfb7a919167e0e273f09a26231616b573c89519569" Feb 26 15:50:10 crc kubenswrapper[4809]: I0226 15:50:10.965406 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535350-jr4m6" Feb 26 15:50:11 crc kubenswrapper[4809]: I0226 15:50:11.039134 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535344-r84gt"] Feb 26 15:50:11 crc kubenswrapper[4809]: I0226 15:50:11.053237 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535344-r84gt"] Feb 26 15:50:13 crc kubenswrapper[4809]: I0226 15:50:13.173359 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b2e435-d27d-443e-81b6-59a4260eea4d" path="/var/lib/kubelet/pods/63b2e435-d27d-443e-81b6-59a4260eea4d/volumes" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.229633 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/util/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.530629 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/util/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.570825 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/pull/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.623690 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/pull/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.804660 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/pull/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.811270 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/util/0.log" Feb 26 15:50:16 crc kubenswrapper[4809]: I0226 15:50:16.833172 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a822fc29_a4db5e64-e72e-41b4-ad05-d1ed0ffdcdf7/extract/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: 
I0226 15:50:17.323991 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/util/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: I0226 15:50:17.611431 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/util/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: I0226 15:50:17.640476 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/pull/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: I0226 15:50:17.663050 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/pull/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: I0226 15:50:17.800958 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/util/0.log" Feb 26 15:50:17 crc kubenswrapper[4809]: I0226 15:50:17.928056 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/pull/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.032786 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/util/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.034780 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19xr2fv_5ca418d2-a956-4ec3-95a0-9f69dea10a9f/extract/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.215240 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/util/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.223350 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/pull/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.260669 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/pull/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.296790 4809 scope.go:117] "RemoveContainer" containerID="846f97919824b12b2c76464e3cf1afb84c5c3c250d60bb32812c2a4a3e61a411" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.477064 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/util/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.529862 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/pull/0.log" Feb 26 
15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.579229 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zsjg_0fb66480-ee41-4b31-a0c8-3c0acc10701b/extract/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.718726 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-utilities/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.919928 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-utilities/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.972871 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-content/0.log" Feb 26 15:50:18 crc kubenswrapper[4809]: I0226 15:50:18.976781 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-content/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.155652 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-utilities/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.189450 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/extract-content/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.514891 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-utilities/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.731331 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-utilities/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.782455 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-content/0.log" Feb 26 15:50:19 crc kubenswrapper[4809]: I0226 15:50:19.883426 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-content/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.122761 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-utilities/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.129452 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/extract-content/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.136006 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvqmb_5863bb93-7ab4-4326-b1fa-e4f1d5d920e2/registry-server/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.452278 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/util/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.692687 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/util/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.751400 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/pull/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.751536 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/pull/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.995234 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/util/0.log" Feb 26 15:50:20 crc kubenswrapper[4809]: I0226 15:50:20.997156 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/pull/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.086418 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4qq9f7_a9e32e0e-6f30-4d37-b75d-cff50247395f/extract/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.257898 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/util/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.270626 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2jjnr_e3b1e666-52f7-42ab-bf72-d47a823ab2fd/registry-server/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.485596 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/pull/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.495181 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/util/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.495941 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/pull/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.715935 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/util/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.716326 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/extract/0.log" Feb 26 
15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.724826 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989mrb7c_60de68b5-ae89-4301-a77c-9d52379551e1/pull/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.792605 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cn6jt_75ed42a0-23bb-4422-bdde-87edffef1c8a/marketplace-operator/0.log" Feb 26 15:50:21 crc kubenswrapper[4809]: I0226 15:50:21.919712 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-utilities/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.051726 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-utilities/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.091901 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-content/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.129049 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-content/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.305761 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-utilities/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.333452 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/extract-content/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.407310 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-utilities/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.515832 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mm4b_45178ad4-29b4-4221-ab5f-8d2c6a9a92d2/registry-server/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.576968 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-content/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.585222 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-utilities/0.log" Feb 26 15:50:22 crc kubenswrapper[4809]: I0226 15:50:22.591486 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-content/0.log" Feb 26 15:50:23 crc kubenswrapper[4809]: I0226 15:50:23.015144 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-utilities/0.log" Feb 26 15:50:23 crc kubenswrapper[4809]: I0226 15:50:23.017666 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/extract-content/0.log" Feb 26 
15:50:23 crc kubenswrapper[4809]: I0226 15:50:23.462471 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-njhx7_f52e8302-5dc1-4b5d-b571-29bd5e69f6a6/registry-server/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.070256 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bddb4c7bc-pt8jw_906a26fc-9fb3-4964-8c39-ef42e4915be5/prometheus-operator-admission-webhook/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.098281 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h5gk4_87348b90-199e-442d-a9ec-263588a8cc54/prometheus-operator/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.135421 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bddb4c7bc-2x2pw_b56a5ce7-761a-410a-84e8-41e01ad2b55e/prometheus-operator-admission-webhook/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.334763 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qq6nr_cc062236-67aa-4219-8e13-45ff2cf44f8e/operator/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.389688 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9tgqx_bb918b49-7bc0-40e4-b7a7-a4ab671e7911/perses-operator/0.log" Feb 26 15:50:39 crc kubenswrapper[4809]: I0226 15:50:39.413800 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vxfq4_a1694e2c-b193-496d-b2df-d4c8857e2cc2/observability-ui-dashboards/0.log" Feb 26 15:50:41 crc kubenswrapper[4809]: I0226 15:50:41.794470 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:50:41 crc kubenswrapper[4809]: I0226 15:50:41.795065 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:50:55 crc kubenswrapper[4809]: I0226 15:50:55.126641 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/manager/0.log" Feb 26 15:50:55 crc kubenswrapper[4809]: I0226 15:50:55.145529 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/kube-rbac-proxy/0.log" Feb 26 15:50:55 crc kubenswrapper[4809]: I0226 15:50:55.171086 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-57cd74799f-hkpdq_5be7c3b0-feda-4dfd-963c-17813fdc8651/manager/1.log" Feb 26 15:51:11 crc kubenswrapper[4809]: I0226 15:51:11.799856 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:51:11 crc kubenswrapper[4809]: I0226 15:51:11.800466 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:51:41 crc kubenswrapper[4809]: I0226 15:51:41.793455 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:51:41 crc kubenswrapper[4809]: I0226 15:51:41.793986 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:51:41 crc kubenswrapper[4809]: I0226 15:51:41.794056 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:51:41 crc kubenswrapper[4809]: I0226 15:51:41.794928 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:51:41 crc kubenswrapper[4809]: I0226 15:51:41.795379 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f" gracePeriod=600 Feb 26 15:51:42 crc kubenswrapper[4809]: I0226 15:51:42.253416 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f" exitCode=0 Feb 26 15:51:42 crc kubenswrapper[4809]: I0226 15:51:42.253488 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f"} Feb 26 15:51:42 crc kubenswrapper[4809]: I0226 15:51:42.253783 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f"} Feb 26 15:51:42 crc kubenswrapper[4809]: I0226 15:51:42.253804 4809 scope.go:117] "RemoveContainer" containerID="59e162b8871fff98ca17088be75cf089d2d2d1808dc1b08306fcfd33b0ea4567" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.472993 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:51:57 crc kubenswrapper[4809]: E0226 15:51:57.474116 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" containerName="oc" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.474134 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" containerName="oc" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.474459 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" containerName="oc" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.477522 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.503572 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.572271 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.572332 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.572366 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vf64\" (UniqueName: \"kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.674609 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.674668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.674691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vf64\" (UniqueName: \"kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.675580 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.675579 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.696097 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vf64\" (UniqueName: \"kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64\") pod \"redhat-marketplace-7b877\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:57 crc kubenswrapper[4809]: I0226 15:51:57.857753 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:51:58 crc kubenswrapper[4809]: I0226 15:51:58.589282 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:51:59 crc kubenswrapper[4809]: I0226 15:51:59.584161 4809 generic.go:334] "Generic (PLEG): container finished" podID="4ba3ef23-f595-448c-96c0-c405170c0490" containerID="60b9204e5aa2e39915a121a9d19a1e8c558268add0b253e84c69a42a0c09eaa3" exitCode=0 Feb 26 15:51:59 crc kubenswrapper[4809]: I0226 15:51:59.584252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerDied","Data":"60b9204e5aa2e39915a121a9d19a1e8c558268add0b253e84c69a42a0c09eaa3"} Feb 26 15:51:59 crc kubenswrapper[4809]: I0226 15:51:59.584473 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerStarted","Data":"59393182381b13193b3bf6009ec64446d670361b055ddac79cb67412eca1624e"} Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.151673 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535352-8m7qs"] Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.154220 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.156827 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.157320 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.157373 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.175810 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535352-8m7qs"] Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.242563 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbwsh\" (UniqueName: \"kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh\") pod \"auto-csr-approver-29535352-8m7qs\" (UID: \"4323b07a-4910-458d-bf1a-7ec4fb6b40e0\") " pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.344930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbwsh\" (UniqueName: \"kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh\") pod \"auto-csr-approver-29535352-8m7qs\" (UID: \"4323b07a-4910-458d-bf1a-7ec4fb6b40e0\") " pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.366429 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbwsh\" (UniqueName: \"kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh\") pod \"auto-csr-approver-29535352-8m7qs\" (UID: \"4323b07a-4910-458d-bf1a-7ec4fb6b40e0\") " pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.476530 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:00 crc kubenswrapper[4809]: I0226 15:52:00.999180 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535352-8m7qs"] Feb 26 15:52:01 crc kubenswrapper[4809]: W0226 15:52:01.010368 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4323b07a_4910_458d_bf1a_7ec4fb6b40e0.slice/crio-2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96 WatchSource:0}: Error finding container 2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96: Status 404 returned error can't find the container with id 2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96 Feb 26 15:52:01 crc kubenswrapper[4809]: I0226 15:52:01.629283 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" event={"ID":"4323b07a-4910-458d-bf1a-7ec4fb6b40e0","Type":"ContainerStarted","Data":"2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96"} Feb 26 15:52:01 crc kubenswrapper[4809]: I0226 15:52:01.631712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerStarted","Data":"11b3ca6a344394cef2efba38c8bc13ddbc40c89a6289ccf8c00c63996b174813"} Feb 26 15:52:03 crc kubenswrapper[4809]: I0226 15:52:03.656959 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" event={"ID":"4323b07a-4910-458d-bf1a-7ec4fb6b40e0","Type":"ContainerStarted","Data":"b26264436e2d3a53186e2a5c52d9c8ea5f7e04fa26dab5706269e8890cb68d1f"} Feb 26 15:52:03 crc kubenswrapper[4809]: I0226 15:52:03.686680 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" podStartSLOduration=2.407933322 podStartE2EDuration="3.686658157s" podCreationTimestamp="2026-02-26 15:52:00 +0000 UTC" firstStartedPulling="2026-02-26 15:52:01.008209218 +0000 UTC m=+5899.481529741" lastFinishedPulling="2026-02-26 15:52:02.286934053 +0000 UTC m=+5900.760254576" observedRunningTime="2026-02-26 15:52:03.675691975 +0000 UTC m=+5902.149012538" watchObservedRunningTime="2026-02-26 15:52:03.686658157 +0000 UTC m=+5902.159978680" Feb 26 15:52:04 crc kubenswrapper[4809]: I0226 15:52:04.671006 4809 generic.go:334] "Generic (PLEG): container finished" podID="4ba3ef23-f595-448c-96c0-c405170c0490" containerID="11b3ca6a344394cef2efba38c8bc13ddbc40c89a6289ccf8c00c63996b174813" exitCode=0 Feb 26 15:52:04 crc kubenswrapper[4809]: I0226 15:52:04.671061 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerDied","Data":"11b3ca6a344394cef2efba38c8bc13ddbc40c89a6289ccf8c00c63996b174813"} Feb 26 15:52:05 crc kubenswrapper[4809]: I0226 15:52:05.685170 4809 generic.go:334] "Generic (PLEG): container finished" podID="4323b07a-4910-458d-bf1a-7ec4fb6b40e0" containerID="b26264436e2d3a53186e2a5c52d9c8ea5f7e04fa26dab5706269e8890cb68d1f" exitCode=0 Feb 26 15:52:05 crc kubenswrapper[4809]: I0226 15:52:05.685265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" 
event={"ID":"4323b07a-4910-458d-bf1a-7ec4fb6b40e0","Type":"ContainerDied","Data":"b26264436e2d3a53186e2a5c52d9c8ea5f7e04fa26dab5706269e8890cb68d1f"} Feb 26 15:52:06 crc kubenswrapper[4809]: I0226 15:52:06.698117 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerStarted","Data":"ea75e202b536a997ca2809d8fe43afdb2c1ba9de69f9b3cc8c4b7bcdef5c8a60"} Feb 26 15:52:06 crc kubenswrapper[4809]: I0226 15:52:06.719363 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7b877" podStartSLOduration=3.861292444 podStartE2EDuration="9.719343009s" podCreationTimestamp="2026-02-26 15:51:57 +0000 UTC" firstStartedPulling="2026-02-26 15:51:59.586051347 +0000 UTC m=+5898.059371870" lastFinishedPulling="2026-02-26 15:52:05.444101872 +0000 UTC m=+5903.917422435" observedRunningTime="2026-02-26 15:52:06.716203129 +0000 UTC m=+5905.189523662" watchObservedRunningTime="2026-02-26 15:52:06.719343009 +0000 UTC m=+5905.192663532" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.392963 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.516354 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbwsh\" (UniqueName: \"kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh\") pod \"4323b07a-4910-458d-bf1a-7ec4fb6b40e0\" (UID: \"4323b07a-4910-458d-bf1a-7ec4fb6b40e0\") " Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.534880 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh" (OuterVolumeSpecName: "kube-api-access-vbwsh") pod "4323b07a-4910-458d-bf1a-7ec4fb6b40e0" (UID: "4323b07a-4910-458d-bf1a-7ec4fb6b40e0"). InnerVolumeSpecName "kube-api-access-vbwsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.625156 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbwsh\" (UniqueName: \"kubernetes.io/projected/4323b07a-4910-458d-bf1a-7ec4fb6b40e0-kube-api-access-vbwsh\") on node \"crc\" DevicePath \"\"" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.714869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" event={"ID":"4323b07a-4910-458d-bf1a-7ec4fb6b40e0","Type":"ContainerDied","Data":"2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96"} Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.714911 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de3c1da15cea22f31601343b0e99847497014165d5df731f31c6f6f90a16e96" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.714971 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535352-8m7qs" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.774489 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535346-wgg47"] Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.789248 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535346-wgg47"] Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.858671 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:07 crc kubenswrapper[4809]: I0226 15:52:07.858742 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:08 crc kubenswrapper[4809]: I0226 15:52:08.270991 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b07699-ff45-4b3a-a61c-b8eccdaa792a" path="/var/lib/kubelet/pods/97b07699-ff45-4b3a-a61c-b8eccdaa792a/volumes" Feb 26 15:52:09 crc kubenswrapper[4809]: I0226 15:52:09.498316 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7b877" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="registry-server" probeResult="failure" output=< Feb 26 15:52:09 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:52:09 crc kubenswrapper[4809]: > Feb 26 15:52:17 crc kubenswrapper[4809]: I0226 15:52:17.920091 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:17 crc kubenswrapper[4809]: I0226 15:52:17.986261 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:18 crc kubenswrapper[4809]: I0226 15:52:18.167580 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:52:18 crc kubenswrapper[4809]: I0226 15:52:18.446154 4809 scope.go:117] "RemoveContainer" containerID="a49ee7b379f836f32327f1a5a92222d852b513d8f45ab98bbc6505547d8eed63" Feb 26 15:52:19 crc kubenswrapper[4809]: I0226 15:52:19.862119 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7b877" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="registry-server" containerID="cri-o://ea75e202b536a997ca2809d8fe43afdb2c1ba9de69f9b3cc8c4b7bcdef5c8a60" gracePeriod=2 Feb 26 15:52:20 crc kubenswrapper[4809]: I0226 15:52:20.875249 4809 generic.go:334] "Generic (PLEG): container finished" podID="4ba3ef23-f595-448c-96c0-c405170c0490" containerID="ea75e202b536a997ca2809d8fe43afdb2c1ba9de69f9b3cc8c4b7bcdef5c8a60" exitCode=0 Feb 26 15:52:20 crc kubenswrapper[4809]: I0226 15:52:20.875302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerDied","Data":"ea75e202b536a997ca2809d8fe43afdb2c1ba9de69f9b3cc8c4b7bcdef5c8a60"} Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.278671 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.342202 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content\") pod \"4ba3ef23-f595-448c-96c0-c405170c0490\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.342309 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities\") pod \"4ba3ef23-f595-448c-96c0-c405170c0490\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.342338 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vf64\" (UniqueName: \"kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64\") pod \"4ba3ef23-f595-448c-96c0-c405170c0490\" (UID: \"4ba3ef23-f595-448c-96c0-c405170c0490\") " Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.343211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities" (OuterVolumeSpecName: "utilities") pod "4ba3ef23-f595-448c-96c0-c405170c0490" (UID: "4ba3ef23-f595-448c-96c0-c405170c0490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.348507 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64" (OuterVolumeSpecName: "kube-api-access-6vf64") pod "4ba3ef23-f595-448c-96c0-c405170c0490" (UID: "4ba3ef23-f595-448c-96c0-c405170c0490"). InnerVolumeSpecName "kube-api-access-6vf64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.368703 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ba3ef23-f595-448c-96c0-c405170c0490" (UID: "4ba3ef23-f595-448c-96c0-c405170c0490"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.448237 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.448278 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba3ef23-f595-448c-96c0-c405170c0490-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.448306 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vf64\" (UniqueName: \"kubernetes.io/projected/4ba3ef23-f595-448c-96c0-c405170c0490-kube-api-access-6vf64\") on node \"crc\" DevicePath \"\"" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.901758 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b877" event={"ID":"4ba3ef23-f595-448c-96c0-c405170c0490","Type":"ContainerDied","Data":"59393182381b13193b3bf6009ec64446d670361b055ddac79cb67412eca1624e"} Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.901815 4809 scope.go:117] "RemoveContainer" containerID="ea75e202b536a997ca2809d8fe43afdb2c1ba9de69f9b3cc8c4b7bcdef5c8a60" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.901827 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b877" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.942096 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.945739 4809 scope.go:117] "RemoveContainer" containerID="11b3ca6a344394cef2efba38c8bc13ddbc40c89a6289ccf8c00c63996b174813" Feb 26 15:52:21 crc kubenswrapper[4809]: I0226 15:52:21.957696 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b877"] Feb 26 15:52:22 crc kubenswrapper[4809]: I0226 15:52:22.278990 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" path="/var/lib/kubelet/pods/4ba3ef23-f595-448c-96c0-c405170c0490/volumes" Feb 26 15:52:22 crc kubenswrapper[4809]: I0226 15:52:22.301289 4809 scope.go:117] "RemoveContainer" containerID="60b9204e5aa2e39915a121a9d19a1e8c558268add0b253e84c69a42a0c09eaa3" Feb 26 15:53:12 crc kubenswrapper[4809]: E0226 15:53:12.006905 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19517d8a_cde4_45ff_88e0_4026e339e2d3.slice/crio-conmon-e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19517d8a_cde4_45ff_88e0_4026e339e2d3.slice/crio-e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12.scope\": RecentStats: unable to find data in memory cache]" Feb 26 15:53:12 crc kubenswrapper[4809]: I0226 15:53:12.557504 4809 generic.go:334] "Generic (PLEG): container finished" podID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerID="e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12" exitCode=0 Feb 26 15:53:12 crc kubenswrapper[4809]: I0226 15:53:12.557563 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-vwnrr/must-gather-vtqpv" event={"ID":"19517d8a-cde4-45ff-88e0-4026e339e2d3","Type":"ContainerDied","Data":"e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12"} Feb 26 15:53:12 crc kubenswrapper[4809]: I0226 15:53:12.558480 4809 scope.go:117] "RemoveContainer" containerID="e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12" Feb 26 15:53:12 crc kubenswrapper[4809]: I0226 15:53:12.975491 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwnrr_must-gather-vtqpv_19517d8a-cde4-45ff-88e0-4026e339e2d3/gather/0.log" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.768002 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:53:18 crc kubenswrapper[4809]: E0226 15:53:18.769112 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="extract-content" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769127 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="extract-content" Feb 26 15:53:18 crc kubenswrapper[4809]: E0226 15:53:18.769155 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="extract-utilities" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769163 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="extract-utilities" Feb 26 15:53:18 crc kubenswrapper[4809]: E0226 15:53:18.769240 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4323b07a-4910-458d-bf1a-7ec4fb6b40e0" containerName="oc" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769249 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4323b07a-4910-458d-bf1a-7ec4fb6b40e0" containerName="oc" Feb 26 15:53:18 crc kubenswrapper[4809]: E0226 15:53:18.769272 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="registry-server" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769280 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="registry-server" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769590 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4323b07a-4910-458d-bf1a-7ec4fb6b40e0" containerName="oc" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.769636 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba3ef23-f595-448c-96c0-c405170c0490" containerName="registry-server" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.774042 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.784681 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.871739 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8tz\" (UniqueName: \"kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.872083 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.872166 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.975071 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.975145 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.975210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd8tz\" (UniqueName: \"kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.976192 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:18 crc kubenswrapper[4809]: I0226 15:53:18.976295 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:19 crc kubenswrapper[4809]: I0226 15:53:19.004456 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wd8tz\" (UniqueName: \"kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz\") pod \"redhat-operators-2dms5\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:19 crc kubenswrapper[4809]: I0226 15:53:19.105532 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:19 crc kubenswrapper[4809]: I0226 15:53:19.591108 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:53:19 crc kubenswrapper[4809]: I0226 15:53:19.684454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerStarted","Data":"5d348d2a257dce4e33c487de409d3e56d81da9760e276dfd05bc52786db8c1a7"} Feb 26 15:53:20 crc kubenswrapper[4809]: I0226 15:53:20.701472 4809 generic.go:334] "Generic (PLEG): container finished" podID="f02bbe27-b181-4daa-9212-05d854e346aa" containerID="9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4" exitCode=0 Feb 26 15:53:20 crc kubenswrapper[4809]: I0226 15:53:20.701560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerDied","Data":"9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4"} Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.098491 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwnrr/must-gather-vtqpv"] Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.099320 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="copy" containerID="cri-o://c706dbb69b0785a81125588f1670c65372479ebcd778bb9948d39ad4304e4c56" gracePeriod=2 Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.116167 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwnrr/must-gather-vtqpv"] Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.727317 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerStarted","Data":"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2"} Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.733256 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwnrr_must-gather-vtqpv_19517d8a-cde4-45ff-88e0-4026e339e2d3/copy/0.log" Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.734005 4809 generic.go:334] "Generic (PLEG): container finished" podID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerID="c706dbb69b0785a81125588f1670c65372479ebcd778bb9948d39ad4304e4c56" exitCode=143 Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.897027 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwnrr_must-gather-vtqpv_19517d8a-cde4-45ff-88e0-4026e339e2d3/copy/0.log" Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.897432 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.980592 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2fqw\" (UniqueName: \"kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw\") pod \"19517d8a-cde4-45ff-88e0-4026e339e2d3\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.981247 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output\") pod \"19517d8a-cde4-45ff-88e0-4026e339e2d3\" (UID: \"19517d8a-cde4-45ff-88e0-4026e339e2d3\") " Feb 26 15:53:22 crc kubenswrapper[4809]: I0226 15:53:22.986367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw" (OuterVolumeSpecName: "kube-api-access-r2fqw") pod "19517d8a-cde4-45ff-88e0-4026e339e2d3" (UID: "19517d8a-cde4-45ff-88e0-4026e339e2d3"). InnerVolumeSpecName "kube-api-access-r2fqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.084891 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2fqw\" (UniqueName: \"kubernetes.io/projected/19517d8a-cde4-45ff-88e0-4026e339e2d3-kube-api-access-r2fqw\") on node \"crc\" DevicePath \"\"" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.208355 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "19517d8a-cde4-45ff-88e0-4026e339e2d3" (UID: "19517d8a-cde4-45ff-88e0-4026e339e2d3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.290537 4809 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19517d8a-cde4-45ff-88e0-4026e339e2d3-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.311681 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8zn2n"] Feb 26 15:53:23 crc kubenswrapper[4809]: E0226 15:53:23.312162 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="gather" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.312176 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="gather" Feb 26 15:53:23 crc kubenswrapper[4809]: E0226 15:53:23.312199 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="copy" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.312207 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="copy" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.312445 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="gather" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.312480 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" containerName="copy" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.314544 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.350729 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zn2n"] Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.392987 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsbb\" (UniqueName: \"kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.393229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.393395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.496833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrsbb\" (UniqueName: \"kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb\") pod 
\"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.496907 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.497370 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.497514 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.497768 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.514592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrsbb\" (UniqueName: \"kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb\") pod \"community-operators-8zn2n\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") " pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.662381 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.749215 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwnrr_must-gather-vtqpv_19517d8a-cde4-45ff-88e0-4026e339e2d3/copy/0.log" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.750877 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vwnrr/must-gather-vtqpv" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.752313 4809 scope.go:117] "RemoveContainer" containerID="c706dbb69b0785a81125588f1670c65372479ebcd778bb9948d39ad4304e4c56" Feb 26 15:53:23 crc kubenswrapper[4809]: I0226 15:53:23.861756 4809 scope.go:117] "RemoveContainer" containerID="e6c532ca55ee93ce72014e047b219c0f9ba9a7b4aca2a75419f7d70bc1281f12" Feb 26 15:53:24 crc kubenswrapper[4809]: W0226 15:53:24.236390 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73ccd65d_b814_4f1d_8e60_7b4b56b3e6b5.slice/crio-8f3c3153cf41c51958a6f62391a8ac48f83c563ea529b56dac69dafabdc81ccf WatchSource:0}: Error finding container 8f3c3153cf41c51958a6f62391a8ac48f83c563ea529b56dac69dafabdc81ccf: Status 404 returned error can't find the container with id 8f3c3153cf41c51958a6f62391a8ac48f83c563ea529b56dac69dafabdc81ccf Feb 26 15:53:24 crc kubenswrapper[4809]: I0226 15:53:24.246611 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zn2n"] Feb 26 15:53:24 crc kubenswrapper[4809]: I0226 15:53:24.299937 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19517d8a-cde4-45ff-88e0-4026e339e2d3" path="/var/lib/kubelet/pods/19517d8a-cde4-45ff-88e0-4026e339e2d3/volumes" Feb 26 15:53:24 crc kubenswrapper[4809]: I0226 15:53:24.764408 4809 generic.go:334] "Generic (PLEG): container finished" podID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerID="c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065" exitCode=0 Feb 26 15:53:24 crc kubenswrapper[4809]: I0226 15:53:24.764529 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerDied","Data":"c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065"} Feb 26 15:53:24 crc kubenswrapper[4809]: I0226 15:53:24.764702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerStarted","Data":"8f3c3153cf41c51958a6f62391a8ac48f83c563ea529b56dac69dafabdc81ccf"} Feb 26 15:53:26 crc kubenswrapper[4809]: I0226 15:53:26.789906 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerStarted","Data":"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07"} Feb 26 15:53:28 crc kubenswrapper[4809]: I0226 15:53:28.819790 4809 generic.go:334] "Generic (PLEG): container finished" podID="f02bbe27-b181-4daa-9212-05d854e346aa" containerID="1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2" exitCode=0 Feb 26 15:53:28 crc kubenswrapper[4809]: I0226 15:53:28.821682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerDied","Data":"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2"} Feb 26 15:53:29 crc kubenswrapper[4809]: I0226 15:53:29.835489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerStarted","Data":"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36"} Feb 26 15:53:29 crc 
kubenswrapper[4809]: I0226 15:53:29.869004 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2dms5" podStartSLOduration=3.2383673809999998 podStartE2EDuration="11.868983214s" podCreationTimestamp="2026-02-26 15:53:18 +0000 UTC" firstStartedPulling="2026-02-26 15:53:20.70456448 +0000 UTC m=+5979.177885003" lastFinishedPulling="2026-02-26 15:53:29.335180303 +0000 UTC m=+5987.808500836" observedRunningTime="2026-02-26 15:53:29.863523991 +0000 UTC m=+5988.336844514" watchObservedRunningTime="2026-02-26 15:53:29.868983214 +0000 UTC m=+5988.342303747" Feb 26 15:53:30 crc kubenswrapper[4809]: I0226 15:53:30.849167 4809 generic.go:334] "Generic (PLEG): container finished" podID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerID="1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07" exitCode=0 Feb 26 15:53:30 crc kubenswrapper[4809]: I0226 15:53:30.849214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerDied","Data":"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07"} Feb 26 15:53:31 crc kubenswrapper[4809]: I0226 15:53:31.861571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerStarted","Data":"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246"} Feb 26 15:53:31 crc kubenswrapper[4809]: I0226 15:53:31.887807 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zn2n" podStartSLOduration=2.373150196 podStartE2EDuration="8.887780875s" podCreationTimestamp="2026-02-26 15:53:23 +0000 UTC" firstStartedPulling="2026-02-26 15:53:24.7663485 +0000 UTC m=+5983.239669023" lastFinishedPulling="2026-02-26 15:53:31.280979169 +0000 UTC m=+5989.754299702" observedRunningTime="2026-02-26 15:53:31.878760722 +0000 UTC m=+5990.352081245" watchObservedRunningTime="2026-02-26 15:53:31.887780875 +0000 UTC m=+5990.361101398" Feb 26 15:53:33 crc kubenswrapper[4809]: I0226 15:53:33.663487 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:33 crc kubenswrapper[4809]: I0226 15:53:33.663960 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:33 crc kubenswrapper[4809]: I0226 15:53:33.720109 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:39 crc kubenswrapper[4809]: I0226 15:53:39.105909 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:39 crc kubenswrapper[4809]: I0226 15:53:39.106314 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:53:40 crc kubenswrapper[4809]: I0226 15:53:40.165677 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2dms5" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" probeResult="failure" output=< Feb 26 15:53:40 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:53:40 crc kubenswrapper[4809]: > Feb 26 15:53:43 crc kubenswrapper[4809]: 
Feb 26 15:53:43 crc kubenswrapper[4809]: I0226 15:53:43.781559 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zn2n"]
Feb 26 15:53:43 crc kubenswrapper[4809]: I0226 15:53:43.987889 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zn2n" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="registry-server" containerID="cri-o://2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246" gracePeriod=2
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.570600 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn2n"
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.682705 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrsbb\" (UniqueName: \"kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb\") pod \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") "
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.683123 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities\") pod \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") "
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.683219 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content\") pod \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\" (UID: \"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5\") "
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.683801 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities" (OuterVolumeSpecName: "utilities") pod "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" (UID: "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.688199 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb" (OuterVolumeSpecName: "kube-api-access-wrsbb") pod "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" (UID: "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5"). InnerVolumeSpecName "kube-api-access-wrsbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.745764 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" (UID: "73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.785854 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.785889 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrsbb\" (UniqueName: \"kubernetes.io/projected/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-kube-api-access-wrsbb\") on node \"crc\" DevicePath \"\""
Feb 26 15:53:44 crc kubenswrapper[4809]: I0226 15:53:44.785899 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.000461 4809 generic.go:334] "Generic (PLEG): container finished" podID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerID="2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246" exitCode=0
Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.000503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerDied","Data":"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246"}
Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.000529 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn2n" event={"ID":"73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5","Type":"ContainerDied","Data":"8f3c3153cf41c51958a6f62391a8ac48f83c563ea529b56dac69dafabdc81ccf"}
Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.000545 4809 scope.go:117] "RemoveContainer" containerID="2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246"
Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.000673 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn2n"
Need to start a new one" pod="openshift-marketplace/community-operators-8zn2n" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.037627 4809 scope.go:117] "RemoveContainer" containerID="1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.043444 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zn2n"] Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.054908 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zn2n"] Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.059963 4809 scope.go:117] "RemoveContainer" containerID="c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.147128 4809 scope.go:117] "RemoveContainer" containerID="2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246" Feb 26 15:53:45 crc kubenswrapper[4809]: E0226 15:53:45.147950 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246\": container with ID starting with 2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246 not found: ID does not exist" containerID="2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.148038 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246"} err="failed to get container status \"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246\": rpc error: code = NotFound desc = could not find container \"2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246\": container with ID starting with 2800637231f571f11975ba6f67a71881bb94dfe70347383a83b8bebcb9435246 not found: ID does not exist" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.148068 4809 scope.go:117] "RemoveContainer" containerID="1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07" Feb 26 15:53:45 crc kubenswrapper[4809]: E0226 15:53:45.148717 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07\": container with ID starting with 1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07 not found: ID does not exist" containerID="1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.148740 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07"} err="failed to get container status \"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07\": rpc error: code = NotFound desc = could not find container \"1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07\": container with ID starting with 1205dceb0bc722b396c80cb61a3b58a4538750274b8972d2ee15833d09058a07 not found: ID does not exist" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.148754 4809 scope.go:117] "RemoveContainer" containerID="c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065" Feb 26 15:53:45 crc kubenswrapper[4809]: E0226 15:53:45.149195 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065\": container with ID starting with c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065 not found: ID does not exist" containerID="c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065" Feb 26 15:53:45 crc kubenswrapper[4809]: I0226 15:53:45.149239 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065"} err="failed to get container status \"c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065\": rpc error: code = NotFound desc = could not find container \"c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065\": container with ID starting with c4795a1ee45de35449d3e87f3e152a0b57e78fcbc41f06bbfa1bf1649b58d065 not found: ID does not exist" Feb 26 15:53:46 crc kubenswrapper[4809]: I0226 15:53:46.272324 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" path="/var/lib/kubelet/pods/73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5/volumes" Feb 26 15:53:50 crc kubenswrapper[4809]: I0226 15:53:50.497774 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2dms5" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" probeResult="failure" output=< Feb 26 15:53:50 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:53:50 crc kubenswrapper[4809]: > Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.153394 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535354-vm8jj"] Feb 26 15:54:00 crc kubenswrapper[4809]: E0226 15:54:00.154558 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="extract-content" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.154577 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="extract-content" Feb 26 15:54:00 crc kubenswrapper[4809]: E0226 15:54:00.154612 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="registry-server" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.154618 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="registry-server" Feb 26 15:54:00 crc kubenswrapper[4809]: E0226 15:54:00.154627 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="extract-utilities" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.154635 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="extract-utilities" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.154879 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ccd65d-b814-4f1d-8e60-7b4b56b3e6b5" containerName="registry-server" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.155828 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.158411 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.158958 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.160456 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.166420 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535354-vm8jj"] Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.174714 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2dms5" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" probeResult="failure" output=< Feb 26 15:54:00 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:54:00 crc kubenswrapper[4809]: > Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.195909 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c96bp\" (UniqueName: \"kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp\") pod \"auto-csr-approver-29535354-vm8jj\" (UID: \"1240cef5-a93a-4936-87e4-2eaf4c96476b\") " pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.299516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c96bp\" (UniqueName: \"kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp\") pod \"auto-csr-approver-29535354-vm8jj\" (UID: \"1240cef5-a93a-4936-87e4-2eaf4c96476b\") " pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.322251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c96bp\" (UniqueName: \"kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp\") pod \"auto-csr-approver-29535354-vm8jj\" (UID: \"1240cef5-a93a-4936-87e4-2eaf4c96476b\") " pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:00 crc kubenswrapper[4809]: I0226 15:54:00.477640 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:01 crc kubenswrapper[4809]: I0226 15:54:01.934512 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535354-vm8jj"] Feb 26 15:54:02 crc kubenswrapper[4809]: I0226 15:54:02.225687 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" event={"ID":"1240cef5-a93a-4936-87e4-2eaf4c96476b","Type":"ContainerStarted","Data":"d5c92d28b95d5f996ff631f12897567eee761fd38cf9d7054e1ee0d36316d591"} Feb 26 15:54:04 crc kubenswrapper[4809]: I0226 15:54:04.269115 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" event={"ID":"1240cef5-a93a-4936-87e4-2eaf4c96476b","Type":"ContainerStarted","Data":"bdef9860011a61a194b20c81596b3006aeceb3e3c7fca8f752afd578bc95e402"} Feb 26 15:54:04 crc kubenswrapper[4809]: I0226 15:54:04.282144 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" podStartSLOduration=3.195031213 podStartE2EDuration="4.282119741s" podCreationTimestamp="2026-02-26 15:54:00 +0000 UTC" firstStartedPulling="2026-02-26 15:54:01.942808547 +0000 UTC m=+6020.416129070" lastFinishedPulling="2026-02-26 15:54:03.029897075 +0000 UTC m=+6021.503217598" observedRunningTime="2026-02-26 15:54:04.275687611 +0000 UTC m=+6022.749008134" watchObservedRunningTime="2026-02-26 15:54:04.282119741 +0000 UTC m=+6022.755440264" Feb 26 15:54:05 crc kubenswrapper[4809]: I0226 15:54:05.273224 4809 generic.go:334] "Generic (PLEG): container finished" podID="1240cef5-a93a-4936-87e4-2eaf4c96476b" containerID="bdef9860011a61a194b20c81596b3006aeceb3e3c7fca8f752afd578bc95e402" exitCode=0 Feb 26 15:54:05 crc kubenswrapper[4809]: I0226 15:54:05.273298 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" event={"ID":"1240cef5-a93a-4936-87e4-2eaf4c96476b","Type":"ContainerDied","Data":"bdef9860011a61a194b20c81596b3006aeceb3e3c7fca8f752afd578bc95e402"} Feb 26 15:54:06 crc kubenswrapper[4809]: I0226 15:54:06.709632 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:06 crc kubenswrapper[4809]: I0226 15:54:06.720143 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c96bp\" (UniqueName: \"kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp\") pod \"1240cef5-a93a-4936-87e4-2eaf4c96476b\" (UID: \"1240cef5-a93a-4936-87e4-2eaf4c96476b\") " Feb 26 15:54:06 crc kubenswrapper[4809]: I0226 15:54:06.756628 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp" (OuterVolumeSpecName: "kube-api-access-c96bp") pod "1240cef5-a93a-4936-87e4-2eaf4c96476b" (UID: "1240cef5-a93a-4936-87e4-2eaf4c96476b"). InnerVolumeSpecName "kube-api-access-c96bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:54:06 crc kubenswrapper[4809]: I0226 15:54:06.827152 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c96bp\" (UniqueName: \"kubernetes.io/projected/1240cef5-a93a-4936-87e4-2eaf4c96476b-kube-api-access-c96bp\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:07 crc kubenswrapper[4809]: I0226 15:54:07.296000 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" event={"ID":"1240cef5-a93a-4936-87e4-2eaf4c96476b","Type":"ContainerDied","Data":"d5c92d28b95d5f996ff631f12897567eee761fd38cf9d7054e1ee0d36316d591"} Feb 26 15:54:07 crc kubenswrapper[4809]: I0226 15:54:07.296099 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c92d28b95d5f996ff631f12897567eee761fd38cf9d7054e1ee0d36316d591" Feb 26 15:54:07 crc kubenswrapper[4809]: I0226 15:54:07.296156 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535354-vm8jj" Feb 26 15:54:07 crc kubenswrapper[4809]: I0226 15:54:07.387740 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535348-9qgm6"] Feb 26 15:54:07 crc kubenswrapper[4809]: I0226 15:54:07.400179 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535348-9qgm6"] Feb 26 15:54:08 crc kubenswrapper[4809]: I0226 15:54:08.276705 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d1a6b3-fb07-4247-9ac3-3668cbd08b5e" path="/var/lib/kubelet/pods/41d1a6b3-fb07-4247-9ac3-3668cbd08b5e/volumes" Feb 26 15:54:10 crc kubenswrapper[4809]: I0226 15:54:10.173646 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2dms5" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" probeResult="failure" output=< Feb 26 15:54:10 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Feb 26 15:54:10 crc kubenswrapper[4809]: > Feb 26 15:54:11 crc kubenswrapper[4809]: I0226 15:54:11.794347 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:54:11 crc kubenswrapper[4809]: I0226 15:54:11.794675 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:54:18 crc kubenswrapper[4809]: I0226 15:54:18.625318 4809 scope.go:117] "RemoveContainer" containerID="2468b380823ce4803aeec1682408d0f30a9f371e494c014e9118ed5c7e830bea" Feb 26 15:54:19 crc kubenswrapper[4809]: I0226 15:54:19.167590 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:54:19 crc kubenswrapper[4809]: I0226 15:54:19.248274 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:54:19 crc kubenswrapper[4809]: I0226 15:54:19.973390 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:54:20 crc kubenswrapper[4809]: I0226 15:54:20.443232 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2dms5" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" containerID="cri-o://88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36" gracePeriod=2 Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.045238 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.127266 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content\") pod \"f02bbe27-b181-4daa-9212-05d854e346aa\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.127382 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities\") pod \"f02bbe27-b181-4daa-9212-05d854e346aa\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.127505 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd8tz\" (UniqueName: \"kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz\") pod \"f02bbe27-b181-4daa-9212-05d854e346aa\" (UID: \"f02bbe27-b181-4daa-9212-05d854e346aa\") " Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.128293 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities" (OuterVolumeSpecName: "utilities") pod "f02bbe27-b181-4daa-9212-05d854e346aa" (UID: "f02bbe27-b181-4daa-9212-05d854e346aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.137617 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz" (OuterVolumeSpecName: "kube-api-access-wd8tz") pod "f02bbe27-b181-4daa-9212-05d854e346aa" (UID: "f02bbe27-b181-4daa-9212-05d854e346aa"). InnerVolumeSpecName "kube-api-access-wd8tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.230573 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd8tz\" (UniqueName: \"kubernetes.io/projected/f02bbe27-b181-4daa-9212-05d854e346aa-kube-api-access-wd8tz\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.230604 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.256738 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f02bbe27-b181-4daa-9212-05d854e346aa" (UID: "f02bbe27-b181-4daa-9212-05d854e346aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.333118 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f02bbe27-b181-4daa-9212-05d854e346aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.461136 4809 generic.go:334] "Generic (PLEG): container finished" podID="f02bbe27-b181-4daa-9212-05d854e346aa" containerID="88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36" exitCode=0 Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.461200 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerDied","Data":"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36"} Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.461237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2dms5" event={"ID":"f02bbe27-b181-4daa-9212-05d854e346aa","Type":"ContainerDied","Data":"5d348d2a257dce4e33c487de409d3e56d81da9760e276dfd05bc52786db8c1a7"} Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.461260 4809 scope.go:117] "RemoveContainer" containerID="88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.461333 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2dms5" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.501142 4809 scope.go:117] "RemoveContainer" containerID="1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.526975 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.541314 4809 scope.go:117] "RemoveContainer" containerID="9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.546219 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2dms5"] Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.611188 4809 scope.go:117] "RemoveContainer" containerID="88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36" Feb 26 15:54:21 crc kubenswrapper[4809]: E0226 15:54:21.611946 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36\": container with ID starting with 88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36 not found: ID does not exist" containerID="88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.611996 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36"} err="failed to get container status \"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36\": rpc error: code = NotFound desc = could not find container \"88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36\": container with ID starting with 88a0c2bfa954a8ed646fdbd812f2e1dd5d406f7592bb35749946510d25451d36 not found: ID does not exist" Feb 26 15:54:21 crc 
kubenswrapper[4809]: I0226 15:54:21.612042 4809 scope.go:117] "RemoveContainer" containerID="1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2" Feb 26 15:54:21 crc kubenswrapper[4809]: E0226 15:54:21.612568 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2\": container with ID starting with 1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2 not found: ID does not exist" containerID="1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.612608 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2"} err="failed to get container status \"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2\": rpc error: code = NotFound desc = could not find container \"1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2\": container with ID starting with 1eb41f5d0043162422f2857acc097f0a079f0ca2de617eb7cbbb2fee7081aab2 not found: ID does not exist" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.612650 4809 scope.go:117] "RemoveContainer" containerID="9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4" Feb 26 15:54:21 crc kubenswrapper[4809]: E0226 15:54:21.613126 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4\": container with ID starting with 9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4 not found: ID does not exist" containerID="9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4" Feb 26 15:54:21 crc kubenswrapper[4809]: I0226 15:54:21.613165 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4"} err="failed to get container status \"9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4\": rpc error: code = NotFound desc = could not find container \"9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4\": container with ID starting with 9d8d0ff5c6fb14b4f38f0c9850f4334f680260f66d58a17b276ef24cf13a0bd4 not found: ID does not exist" Feb 26 15:54:22 crc kubenswrapper[4809]: I0226 15:54:22.277987 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" path="/var/lib/kubelet/pods/f02bbe27-b181-4daa-9212-05d854e346aa/volumes" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.337882 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:26 crc kubenswrapper[4809]: E0226 15:54:26.339184 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1240cef5-a93a-4936-87e4-2eaf4c96476b" containerName="oc" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339203 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1240cef5-a93a-4936-87e4-2eaf4c96476b" containerName="oc" Feb 26 15:54:26 crc kubenswrapper[4809]: E0226 15:54:26.339252 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339262 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" Feb 26 15:54:26 crc kubenswrapper[4809]: E0226 15:54:26.339287 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="extract-content" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339294 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="extract-content" Feb 26 15:54:26 crc kubenswrapper[4809]: E0226 15:54:26.339315 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="extract-utilities" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339323 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="extract-utilities" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339609 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1240cef5-a93a-4936-87e4-2eaf4c96476b" containerName="oc" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.339656 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02bbe27-b181-4daa-9212-05d854e346aa" containerName="registry-server" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.342361 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.355488 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.412342 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhrv\" (UniqueName: \"kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.412606 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.413040 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.514552 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.514707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities\") pod 
\"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.514769 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crhrv\" (UniqueName: \"kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.515301 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.515332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.948107 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crhrv\" (UniqueName: \"kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv\") pod \"certified-operators-z75q8\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:26 crc kubenswrapper[4809]: I0226 15:54:26.982082 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:27 crc kubenswrapper[4809]: I0226 15:54:27.532339 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:27 crc kubenswrapper[4809]: I0226 15:54:27.565526 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerStarted","Data":"80e5988f596a997fac4e2828244a692ae3a0c35dc5f7a9b7958db8cf83a87862"} Feb 26 15:54:28 crc kubenswrapper[4809]: I0226 15:54:28.578146 4809 generic.go:334] "Generic (PLEG): container finished" podID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerID="ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8" exitCode=0 Feb 26 15:54:28 crc kubenswrapper[4809]: I0226 15:54:28.578231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerDied","Data":"ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8"} Feb 26 15:54:29 crc kubenswrapper[4809]: I0226 15:54:29.601777 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerStarted","Data":"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1"} Feb 26 15:54:31 crc kubenswrapper[4809]: I0226 15:54:31.630326 4809 generic.go:334] "Generic (PLEG): container finished" podID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerID="b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1" exitCode=0 Feb 26 15:54:31 crc kubenswrapper[4809]: I0226 15:54:31.630373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerDied","Data":"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1"} Feb 26 15:54:32 crc kubenswrapper[4809]: I0226 15:54:32.646309 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerStarted","Data":"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a"} Feb 26 15:54:32 crc kubenswrapper[4809]: I0226 15:54:32.683782 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z75q8" podStartSLOduration=3.037520278 podStartE2EDuration="6.683762813s" podCreationTimestamp="2026-02-26 15:54:26 +0000 UTC" firstStartedPulling="2026-02-26 15:54:28.581754105 +0000 UTC m=+6047.055074628" lastFinishedPulling="2026-02-26 15:54:32.22799664 +0000 UTC m=+6050.701317163" observedRunningTime="2026-02-26 15:54:32.669587555 +0000 UTC m=+6051.142908088" watchObservedRunningTime="2026-02-26 15:54:32.683762813 +0000 UTC m=+6051.157083336" Feb 26 15:54:36 crc kubenswrapper[4809]: I0226 15:54:36.982253 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:36 crc kubenswrapper[4809]: I0226 15:54:36.983968 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:37 crc kubenswrapper[4809]: I0226 15:54:37.036873 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:37 crc kubenswrapper[4809]: I0226 15:54:37.758625 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:37 crc kubenswrapper[4809]: I0226 15:54:37.812153 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:39 crc kubenswrapper[4809]: I0226 15:54:39.721893 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z75q8" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="registry-server" containerID="cri-o://40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a" gracePeriod=2 Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.221339 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.368668 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crhrv\" (UniqueName: \"kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv\") pod \"37ce52b1-edb5-4fae-b894-fa12659a67e2\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.369037 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities\") pod \"37ce52b1-edb5-4fae-b894-fa12659a67e2\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.369201 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content\") pod \"37ce52b1-edb5-4fae-b894-fa12659a67e2\" (UID: \"37ce52b1-edb5-4fae-b894-fa12659a67e2\") " Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.379493 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv" (OuterVolumeSpecName: "kube-api-access-crhrv") pod "37ce52b1-edb5-4fae-b894-fa12659a67e2" (UID: "37ce52b1-edb5-4fae-b894-fa12659a67e2"). InnerVolumeSpecName "kube-api-access-crhrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.382044 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities" (OuterVolumeSpecName: "utilities") pod "37ce52b1-edb5-4fae-b894-fa12659a67e2" (UID: "37ce52b1-edb5-4fae-b894-fa12659a67e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.430167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37ce52b1-edb5-4fae-b894-fa12659a67e2" (UID: "37ce52b1-edb5-4fae-b894-fa12659a67e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.473499 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.473532 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crhrv\" (UniqueName: \"kubernetes.io/projected/37ce52b1-edb5-4fae-b894-fa12659a67e2-kube-api-access-crhrv\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.473542 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ce52b1-edb5-4fae-b894-fa12659a67e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.736793 4809 generic.go:334] "Generic (PLEG): container finished" podID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerID="40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a" exitCode=0 Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.736858 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerDied","Data":"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a"} Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.736870 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z75q8" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.736912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z75q8" event={"ID":"37ce52b1-edb5-4fae-b894-fa12659a67e2","Type":"ContainerDied","Data":"80e5988f596a997fac4e2828244a692ae3a0c35dc5f7a9b7958db8cf83a87862"} Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.736943 4809 scope.go:117] "RemoveContainer" containerID="40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.790482 4809 scope.go:117] "RemoveContainer" containerID="b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.819678 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.825264 4809 scope.go:117] "RemoveContainer" containerID="ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.830810 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z75q8"] Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.895459 4809 scope.go:117] "RemoveContainer" containerID="40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a" Feb 26 15:54:40 crc kubenswrapper[4809]: E0226 15:54:40.895879 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a\": container with ID starting with 40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a not found: ID does not exist" containerID="40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.895911 
4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a"} err="failed to get container status \"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a\": rpc error: code = NotFound desc = could not find container \"40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a\": container with ID starting with 40588dbce98fa2c75ee6aaeb02f457ae5bbde462c0f291b96e315f1c08e8f50a not found: ID does not exist" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.895932 4809 scope.go:117] "RemoveContainer" containerID="b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1" Feb 26 15:54:40 crc kubenswrapper[4809]: E0226 15:54:40.896320 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1\": container with ID starting with b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1 not found: ID does not exist" containerID="b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.896354 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1"} err="failed to get container status \"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1\": rpc error: code = NotFound desc = could not find container \"b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1\": container with ID starting with b720e53d676ca6ae804359aecfd233a93c33ca5bd563a25f0ae95653f876a3e1 not found: ID does not exist" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.896374 4809 scope.go:117] "RemoveContainer" containerID="ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8" Feb 26 15:54:40 crc kubenswrapper[4809]: E0226 15:54:40.896727 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8\": container with ID starting with ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8 not found: ID does not exist" containerID="ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8" Feb 26 15:54:40 crc kubenswrapper[4809]: I0226 15:54:40.896761 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8"} err="failed to get container status \"ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8\": rpc error: code = NotFound desc = could not find container \"ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8\": container with ID starting with ba851045492fcbfdc73aa2272e214e1727f4978d908b647b512310c820e60eb8 not found: ID does not exist" Feb 26 15:54:41 crc kubenswrapper[4809]: I0226 15:54:41.794413 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:54:41 crc kubenswrapper[4809]: I0226 15:54:41.794787 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" 
podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:54:42 crc kubenswrapper[4809]: I0226 15:54:42.271001 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" path="/var/lib/kubelet/pods/37ce52b1-edb5-4fae-b894-fa12659a67e2/volumes" Feb 26 15:55:11 crc kubenswrapper[4809]: I0226 15:55:11.793796 4809 patch_prober.go:28] interesting pod/machine-config-daemon-72xsh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 15:55:11 crc kubenswrapper[4809]: I0226 15:55:11.794402 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 15:55:11 crc kubenswrapper[4809]: I0226 15:55:11.794459 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" Feb 26 15:55:11 crc kubenswrapper[4809]: I0226 15:55:11.795284 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f"} pod="openshift-machine-config-operator/machine-config-daemon-72xsh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 15:55:11 crc kubenswrapper[4809]: I0226 15:55:11.795343 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerName="machine-config-daemon" containerID="cri-o://799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" gracePeriod=600 Feb 26 15:55:11 crc kubenswrapper[4809]: E0226 15:55:11.921302 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:55:12 crc kubenswrapper[4809]: I0226 15:55:12.143066 4809 generic.go:334] "Generic (PLEG): container finished" podID="2ee5dfae-6391-4988-900c-e8abcb031d30" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" exitCode=0 Feb 26 15:55:12 crc kubenswrapper[4809]: I0226 15:55:12.143120 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerDied","Data":"799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f"} Feb 26 15:55:12 crc kubenswrapper[4809]: I0226 15:55:12.143159 4809 scope.go:117] "RemoveContainer" containerID="4b890d2855331441c7f5148a2a8a9869ace215746753224df830035be32ef30f" Feb 26 15:55:12 crc kubenswrapper[4809]: I0226 
15:55:12.144081 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:55:12 crc kubenswrapper[4809]: E0226 15:55:12.144509 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:55:25 crc kubenswrapper[4809]: I0226 15:55:25.256900 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:55:25 crc kubenswrapper[4809]: E0226 15:55:25.257914 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:55:37 crc kubenswrapper[4809]: I0226 15:55:37.257060 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:55:37 crc kubenswrapper[4809]: E0226 15:55:37.257998 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:55:51 crc kubenswrapper[4809]: I0226 15:55:51.256617 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:55:51 crc kubenswrapper[4809]: E0226 15:55:51.257653 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.175273 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535356-jcklj"] Feb 26 15:56:00 crc kubenswrapper[4809]: E0226 15:56:00.176575 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="extract-content" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.176594 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="extract-content" Feb 26 15:56:00 crc kubenswrapper[4809]: E0226 15:56:00.176614 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="registry-server" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.176621 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="registry-server" Feb 26 15:56:00 crc kubenswrapper[4809]: E0226 15:56:00.176643 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="extract-utilities" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.176652 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="extract-utilities" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.176991 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ce52b1-edb5-4fae-b894-fa12659a67e2" containerName="registry-server" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.178170 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.180180 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.181377 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.181579 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.200185 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535356-jcklj"] Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.223220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvjz\" (UniqueName: \"kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz\") pod \"auto-csr-approver-29535356-jcklj\" (UID: \"cef6bdf1-69ed-48cd-b474-ec63bb566023\") " pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.326276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvjz\" (UniqueName: \"kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz\") pod \"auto-csr-approver-29535356-jcklj\" (UID: \"cef6bdf1-69ed-48cd-b474-ec63bb566023\") " pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.348215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvjz\" (UniqueName: \"kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz\") pod \"auto-csr-approver-29535356-jcklj\" (UID: \"cef6bdf1-69ed-48cd-b474-ec63bb566023\") " pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:00 crc kubenswrapper[4809]: I0226 15:56:00.509587 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:01 crc kubenswrapper[4809]: W0226 15:56:01.031177 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcef6bdf1_69ed_48cd_b474_ec63bb566023.slice/crio-bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5 WatchSource:0}: Error finding container bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5: Status 404 returned error can't find the container with id bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5 Feb 26 15:56:01 crc kubenswrapper[4809]: I0226 15:56:01.033810 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 15:56:01 crc kubenswrapper[4809]: I0226 15:56:01.036704 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535356-jcklj"] Feb 26 15:56:01 crc kubenswrapper[4809]: I0226 15:56:01.822742 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535356-jcklj" event={"ID":"cef6bdf1-69ed-48cd-b474-ec63bb566023","Type":"ContainerStarted","Data":"bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5"} Feb 26 15:56:02 crc kubenswrapper[4809]: I0226 15:56:02.833733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535356-jcklj" event={"ID":"cef6bdf1-69ed-48cd-b474-ec63bb566023","Type":"ContainerStarted","Data":"4508511b989536cd2617fc8b5bfdf0cd09cf242d8efac66fbd6f0c5f7a2158ec"} Feb 26 15:56:02 crc kubenswrapper[4809]: I0226 15:56:02.856492 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535356-jcklj" podStartSLOduration=1.975410772 podStartE2EDuration="2.856471505s" podCreationTimestamp="2026-02-26 15:56:00 +0000 UTC" firstStartedPulling="2026-02-26 15:56:01.033485512 +0000 UTC m=+6139.506806035" lastFinishedPulling="2026-02-26 15:56:01.914546255 +0000 UTC m=+6140.387866768" observedRunningTime="2026-02-26 15:56:02.846062793 +0000 UTC m=+6141.319383326" watchObservedRunningTime="2026-02-26 15:56:02.856471505 +0000 UTC m=+6141.329792028" Feb 26 15:56:03 crc kubenswrapper[4809]: I0226 15:56:03.847708 4809 generic.go:334] "Generic (PLEG): container finished" podID="cef6bdf1-69ed-48cd-b474-ec63bb566023" containerID="4508511b989536cd2617fc8b5bfdf0cd09cf242d8efac66fbd6f0c5f7a2158ec" exitCode=0 Feb 26 15:56:03 crc kubenswrapper[4809]: I0226 15:56:03.847806 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535356-jcklj" event={"ID":"cef6bdf1-69ed-48cd-b474-ec63bb566023","Type":"ContainerDied","Data":"4508511b989536cd2617fc8b5bfdf0cd09cf242d8efac66fbd6f0c5f7a2158ec"} Feb 26 15:56:04 crc kubenswrapper[4809]: I0226 15:56:04.257820 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:56:04 crc kubenswrapper[4809]: E0226 15:56:04.258573 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 
15:56:05.333612 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.410893 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvjz\" (UniqueName: \"kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz\") pod \"cef6bdf1-69ed-48cd-b474-ec63bb566023\" (UID: \"cef6bdf1-69ed-48cd-b474-ec63bb566023\") " Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.419327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz" (OuterVolumeSpecName: "kube-api-access-4zvjz") pod "cef6bdf1-69ed-48cd-b474-ec63bb566023" (UID: "cef6bdf1-69ed-48cd-b474-ec63bb566023"). InnerVolumeSpecName "kube-api-access-4zvjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.515223 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvjz\" (UniqueName: \"kubernetes.io/projected/cef6bdf1-69ed-48cd-b474-ec63bb566023-kube-api-access-4zvjz\") on node \"crc\" DevicePath \"\"" Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.878606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535356-jcklj" event={"ID":"cef6bdf1-69ed-48cd-b474-ec63bb566023","Type":"ContainerDied","Data":"bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5"} Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.878653 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4597b32717fab5b6a0932424586a8a6a87fb2e1452d8cdc32a20fc280440e5" Feb 26 15:56:05 crc kubenswrapper[4809]: I0226 15:56:05.878658 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535356-jcklj" Feb 26 15:56:06 crc kubenswrapper[4809]: I0226 15:56:06.432693 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535350-jr4m6"] Feb 26 15:56:06 crc kubenswrapper[4809]: I0226 15:56:06.447932 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535350-jr4m6"] Feb 26 15:56:08 crc kubenswrapper[4809]: I0226 15:56:08.273820 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6847cdd0-5d36-47b9-b7a4-41d0be68a1cf" path="/var/lib/kubelet/pods/6847cdd0-5d36-47b9-b7a4-41d0be68a1cf/volumes" Feb 26 15:56:18 crc kubenswrapper[4809]: I0226 15:56:18.258060 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:56:18 crc kubenswrapper[4809]: E0226 15:56:18.259283 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:56:18 crc kubenswrapper[4809]: I0226 15:56:18.880160 4809 scope.go:117] "RemoveContainer" containerID="ae486067bd8678fdad26dc4e6bb8a13025e24715b95b155a47d810695c130aa8" Feb 26 15:56:31 crc kubenswrapper[4809]: I0226 15:56:31.257346 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:56:31 crc kubenswrapper[4809]: E0226 15:56:31.258313 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:56:44 crc kubenswrapper[4809]: I0226 15:56:44.256982 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:56:44 crc kubenswrapper[4809]: E0226 15:56:44.257651 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:56:57 crc kubenswrapper[4809]: I0226 15:56:57.258621 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:56:57 crc kubenswrapper[4809]: E0226 15:56:57.261060 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 
15:57:09 crc kubenswrapper[4809]: I0226 15:57:09.256755 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:57:09 crc kubenswrapper[4809]: E0226 15:57:09.258592 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:57:23 crc kubenswrapper[4809]: I0226 15:57:23.257577 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:57:23 crc kubenswrapper[4809]: E0226 15:57:23.258347 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:57:37 crc kubenswrapper[4809]: I0226 15:57:37.257557 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:57:37 crc kubenswrapper[4809]: E0226 15:57:37.258266 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:57:49 crc kubenswrapper[4809]: I0226 15:57:49.257957 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:57:49 crc kubenswrapper[4809]: E0226 15:57:49.258996 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.150863 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535358-2z6l9"] Feb 26 15:58:00 crc kubenswrapper[4809]: E0226 15:58:00.152022 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef6bdf1-69ed-48cd-b474-ec63bb566023" containerName="oc" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.152036 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef6bdf1-69ed-48cd-b474-ec63bb566023" containerName="oc" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.152348 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef6bdf1-69ed-48cd-b474-ec63bb566023" containerName="oc" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.153130 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.155850 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.156287 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.156870 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.163805 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535358-2z6l9"] Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.252098 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgh5\" (UniqueName: \"kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5\") pod \"auto-csr-approver-29535358-2z6l9\" (UID: \"786aba50-81ed-46cc-9481-54a92e648673\") " pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.354651 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgh5\" (UniqueName: \"kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5\") pod \"auto-csr-approver-29535358-2z6l9\" (UID: \"786aba50-81ed-46cc-9481-54a92e648673\") " pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.381636 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgh5\" (UniqueName: \"kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5\") pod \"auto-csr-approver-29535358-2z6l9\" (UID: \"786aba50-81ed-46cc-9481-54a92e648673\") " pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.472694 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:00 crc kubenswrapper[4809]: I0226 15:58:00.968792 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535358-2z6l9"] Feb 26 15:58:01 crc kubenswrapper[4809]: I0226 15:58:01.355705 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" event={"ID":"786aba50-81ed-46cc-9481-54a92e648673","Type":"ContainerStarted","Data":"1c6e37f0c551afe42ca08166611e694b68a98e23815123fb8dee6266a9f3d135"} Feb 26 15:58:03 crc kubenswrapper[4809]: I0226 15:58:03.385453 4809 generic.go:334] "Generic (PLEG): container finished" podID="786aba50-81ed-46cc-9481-54a92e648673" containerID="82281e0fd65427d12fb667c18ff1ebcaeebefc9985d5bb1f8be6c9d343ace115" exitCode=0 Feb 26 15:58:03 crc kubenswrapper[4809]: I0226 15:58:03.385913 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" event={"ID":"786aba50-81ed-46cc-9481-54a92e648673","Type":"ContainerDied","Data":"82281e0fd65427d12fb667c18ff1ebcaeebefc9985d5bb1f8be6c9d343ace115"} Feb 26 15:58:04 crc kubenswrapper[4809]: I0226 15:58:04.256751 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:58:04 crc kubenswrapper[4809]: E0226 15:58:04.257113 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:58:04 crc kubenswrapper[4809]: I0226 15:58:04.895047 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.010991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgh5\" (UniqueName: \"kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5\") pod \"786aba50-81ed-46cc-9481-54a92e648673\" (UID: \"786aba50-81ed-46cc-9481-54a92e648673\") " Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.016471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5" (OuterVolumeSpecName: "kube-api-access-6hgh5") pod "786aba50-81ed-46cc-9481-54a92e648673" (UID: "786aba50-81ed-46cc-9481-54a92e648673"). InnerVolumeSpecName "kube-api-access-6hgh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.114209 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hgh5\" (UniqueName: \"kubernetes.io/projected/786aba50-81ed-46cc-9481-54a92e648673-kube-api-access-6hgh5\") on node \"crc\" DevicePath \"\"" Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.419114 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" event={"ID":"786aba50-81ed-46cc-9481-54a92e648673","Type":"ContainerDied","Data":"1c6e37f0c551afe42ca08166611e694b68a98e23815123fb8dee6266a9f3d135"} Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.419590 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6e37f0c551afe42ca08166611e694b68a98e23815123fb8dee6266a9f3d135" Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.419212 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535358-2z6l9" Feb 26 15:58:05 crc kubenswrapper[4809]: I0226 15:58:05.998770 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535352-8m7qs"] Feb 26 15:58:06 crc kubenswrapper[4809]: I0226 15:58:06.013355 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535352-8m7qs"] Feb 26 15:58:06 crc kubenswrapper[4809]: I0226 15:58:06.275075 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4323b07a-4910-458d-bf1a-7ec4fb6b40e0" path="/var/lib/kubelet/pods/4323b07a-4910-458d-bf1a-7ec4fb6b40e0/volumes" Feb 26 15:58:17 crc kubenswrapper[4809]: I0226 15:58:17.257533 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:58:17 crc kubenswrapper[4809]: E0226 15:58:17.258419 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:58:19 crc kubenswrapper[4809]: I0226 15:58:19.059661 4809 scope.go:117] "RemoveContainer" containerID="b26264436e2d3a53186e2a5c52d9c8ea5f7e04fa26dab5706269e8890cb68d1f" Feb 26 15:58:32 crc kubenswrapper[4809]: I0226 15:58:32.256945 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:58:32 crc kubenswrapper[4809]: E0226 15:58:32.258484 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:58:44 crc kubenswrapper[4809]: I0226 15:58:44.258151 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:58:44 crc kubenswrapper[4809]: E0226 15:58:44.258881 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:58:58 crc kubenswrapper[4809]: I0226 15:58:58.257422 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:58:58 crc kubenswrapper[4809]: E0226 15:58:58.258439 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:59:11 crc kubenswrapper[4809]: I0226 15:59:11.257139 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:59:11 crc kubenswrapper[4809]: E0226 15:59:11.258131 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:59:22 crc kubenswrapper[4809]: I0226 15:59:22.265907 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:59:22 crc kubenswrapper[4809]: E0226 15:59:22.266975 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:59:36 crc kubenswrapper[4809]: I0226 15:59:36.257922 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:59:36 crc kubenswrapper[4809]: E0226 15:59:36.259044 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 15:59:50 crc kubenswrapper[4809]: I0226 15:59:50.258367 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 15:59:50 crc kubenswrapper[4809]: E0226 15:59:50.259711 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.177011 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x"] Feb 26 16:00:00 crc kubenswrapper[4809]: E0226 16:00:00.178794 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786aba50-81ed-46cc-9481-54a92e648673" containerName="oc" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.178812 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="786aba50-81ed-46cc-9481-54a92e648673" containerName="oc" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.179206 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="786aba50-81ed-46cc-9481-54a92e648673" containerName="oc" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.180504 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.189103 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.193369 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.193902 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535360-z2k8w"] Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.195839 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.197339 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-psvsk" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.198951 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.199086 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.209242 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x"] Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.222652 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535360-z2k8w"] Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.300975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrl2\" (UniqueName: \"kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.301144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.301598 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5m7\" (UniqueName: \"kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7\") pod \"auto-csr-approver-29535360-z2k8w\" (UID: \"44f64260-86b8-4a3d-a83d-77b629e37788\") " pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.301775 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.404336 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.405717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrl2\" (UniqueName: \"kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.406210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.407550 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.408452 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5m7\" (UniqueName: \"kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7\") pod \"auto-csr-approver-29535360-z2k8w\" (UID: \"44f64260-86b8-4a3d-a83d-77b629e37788\") " pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.413031 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.427074 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrl2\" (UniqueName: \"kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2\") pod \"collect-profiles-29535360-7k67x\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.429231 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5m7\" (UniqueName: \"kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7\") pod \"auto-csr-approver-29535360-z2k8w\" (UID: \"44f64260-86b8-4a3d-a83d-77b629e37788\") " pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.505476 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:00 crc kubenswrapper[4809]: I0226 16:00:00.525189 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.053054 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535360-z2k8w"] Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.069334 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x"] Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.937337 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" event={"ID":"44f64260-86b8-4a3d-a83d-77b629e37788","Type":"ContainerStarted","Data":"a8c66d1d8787d203e8cac06074310c9f9df99f48632f7291754ad13b1f4f9836"} Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.942851 4809 generic.go:334] "Generic (PLEG): container finished" podID="af7c7700-428d-479c-bf60-783733bcb549" containerID="37e9b8d0ecf6dacc09fad0c4bdcabc686bcdd3a09e9c6b4e5b1e9c51e00bc91e" exitCode=0 Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.942887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" event={"ID":"af7c7700-428d-479c-bf60-783733bcb549","Type":"ContainerDied","Data":"37e9b8d0ecf6dacc09fad0c4bdcabc686bcdd3a09e9c6b4e5b1e9c51e00bc91e"} Feb 26 16:00:01 crc kubenswrapper[4809]: I0226 16:00:01.942911 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" event={"ID":"af7c7700-428d-479c-bf60-783733bcb549","Type":"ContainerStarted","Data":"05681cab62c8a74cb186fa32bb19c9127dc467d7f4a35cc3777fd9dd8ab9685b"} Feb 26 16:00:02 crc kubenswrapper[4809]: I0226 16:00:02.954678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" event={"ID":"44f64260-86b8-4a3d-a83d-77b629e37788","Type":"ContainerStarted","Data":"887afaafbb03a798b87e1b87d43f216af1f8099b8be6d7bdb2ff762f580ccf5b"} Feb 26 16:00:02 crc kubenswrapper[4809]: I0226 16:00:02.974688 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" podStartSLOduration=1.4129332479999999 podStartE2EDuration="2.974670059s" podCreationTimestamp="2026-02-26 16:00:00 +0000 UTC" firstStartedPulling="2026-02-26 16:00:01.059543813 +0000 UTC m=+6379.532864336" lastFinishedPulling="2026-02-26 16:00:02.621280624 +0000 UTC m=+6381.094601147" observedRunningTime="2026-02-26 16:00:02.970451761 +0000 UTC m=+6381.443772294" watchObservedRunningTime="2026-02-26 16:00:02.974670059 +0000 UTC m=+6381.447990582" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.256962 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 16:00:03 crc kubenswrapper[4809]: E0226 16:00:03.257377 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-72xsh_openshift-machine-config-operator(2ee5dfae-6391-4988-900c-e8abcb031d30)\"" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" podUID="2ee5dfae-6391-4988-900c-e8abcb031d30" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.512930 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.599683 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume\") pod \"af7c7700-428d-479c-bf60-783733bcb549\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.599732 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxrl2\" (UniqueName: \"kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2\") pod \"af7c7700-428d-479c-bf60-783733bcb549\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.599863 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume\") pod \"af7c7700-428d-479c-bf60-783733bcb549\" (UID: \"af7c7700-428d-479c-bf60-783733bcb549\") " Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.600647 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume" (OuterVolumeSpecName: "config-volume") pod "af7c7700-428d-479c-bf60-783733bcb549" (UID: "af7c7700-428d-479c-bf60-783733bcb549"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.607244 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "af7c7700-428d-479c-bf60-783733bcb549" (UID: "af7c7700-428d-479c-bf60-783733bcb549"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.607449 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2" (OuterVolumeSpecName: "kube-api-access-lxrl2") pod "af7c7700-428d-479c-bf60-783733bcb549" (UID: "af7c7700-428d-479c-bf60-783733bcb549"). InnerVolumeSpecName "kube-api-access-lxrl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.702490 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxrl2\" (UniqueName: \"kubernetes.io/projected/af7c7700-428d-479c-bf60-783733bcb549-kube-api-access-lxrl2\") on node \"crc\" DevicePath \"\"" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.702534 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7c7700-428d-479c-bf60-783733bcb549-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.702547 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af7c7700-428d-479c-bf60-783733bcb549-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.965375 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" event={"ID":"af7c7700-428d-479c-bf60-783733bcb549","Type":"ContainerDied","Data":"05681cab62c8a74cb186fa32bb19c9127dc467d7f4a35cc3777fd9dd8ab9685b"} Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.965429 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05681cab62c8a74cb186fa32bb19c9127dc467d7f4a35cc3777fd9dd8ab9685b" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.965408 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535360-7k67x" Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.966977 4809 generic.go:334] "Generic (PLEG): container finished" podID="44f64260-86b8-4a3d-a83d-77b629e37788" containerID="887afaafbb03a798b87e1b87d43f216af1f8099b8be6d7bdb2ff762f580ccf5b" exitCode=0 Feb 26 16:00:03 crc kubenswrapper[4809]: I0226 16:00:03.967035 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" event={"ID":"44f64260-86b8-4a3d-a83d-77b629e37788","Type":"ContainerDied","Data":"887afaafbb03a798b87e1b87d43f216af1f8099b8be6d7bdb2ff762f580ccf5b"} Feb 26 16:00:04 crc kubenswrapper[4809]: I0226 16:00:04.601599 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n"] Feb 26 16:00:04 crc kubenswrapper[4809]: I0226 16:00:04.611712 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535315-jfc7n"] Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.433210 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.547399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd5m7\" (UniqueName: \"kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7\") pod \"44f64260-86b8-4a3d-a83d-77b629e37788\" (UID: \"44f64260-86b8-4a3d-a83d-77b629e37788\") " Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.558201 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7" (OuterVolumeSpecName: "kube-api-access-cd5m7") pod "44f64260-86b8-4a3d-a83d-77b629e37788" (UID: "44f64260-86b8-4a3d-a83d-77b629e37788"). 
InnerVolumeSpecName "kube-api-access-cd5m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.651288 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd5m7\" (UniqueName: \"kubernetes.io/projected/44f64260-86b8-4a3d-a83d-77b629e37788-kube-api-access-cd5m7\") on node \"crc\" DevicePath \"\"" Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.993994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" event={"ID":"44f64260-86b8-4a3d-a83d-77b629e37788","Type":"ContainerDied","Data":"a8c66d1d8787d203e8cac06074310c9f9df99f48632f7291754ad13b1f4f9836"} Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.994048 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c66d1d8787d203e8cac06074310c9f9df99f48632f7291754ad13b1f4f9836" Feb 26 16:00:05 crc kubenswrapper[4809]: I0226 16:00:05.994102 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535360-z2k8w" Feb 26 16:00:06 crc kubenswrapper[4809]: I0226 16:00:06.282543 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823f865d-0e91-4975-8c47-bc6a61a1a027" path="/var/lib/kubelet/pods/823f865d-0e91-4975-8c47-bc6a61a1a027/volumes" Feb 26 16:00:06 crc kubenswrapper[4809]: I0226 16:00:06.507890 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535354-vm8jj"] Feb 26 16:00:06 crc kubenswrapper[4809]: I0226 16:00:06.523850 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535354-vm8jj"] Feb 26 16:00:08 crc kubenswrapper[4809]: I0226 16:00:08.282187 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1240cef5-a93a-4936-87e4-2eaf4c96476b" path="/var/lib/kubelet/pods/1240cef5-a93a-4936-87e4-2eaf4c96476b/volumes" Feb 26 16:00:18 crc kubenswrapper[4809]: I0226 16:00:18.257513 4809 scope.go:117] "RemoveContainer" containerID="799de89df3d95cbb3b37bed41b3caccb17fccec780585502904dcb36d1ae334f" Feb 26 16:00:19 crc kubenswrapper[4809]: I0226 16:00:19.154637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-72xsh" event={"ID":"2ee5dfae-6391-4988-900c-e8abcb031d30","Type":"ContainerStarted","Data":"4cc5caa5f5923df0d2e235bf4194b035bd781dc93858f9f4b81e913751d1fd18"} Feb 26 16:00:19 crc kubenswrapper[4809]: I0226 16:00:19.191096 4809 scope.go:117] "RemoveContainer" containerID="ff397b7f04e4d9d07665e62d56faf289a0939a05e06f3bbe03185757bbfb93c6" Feb 26 16:00:19 crc kubenswrapper[4809]: I0226 16:00:19.251548 4809 scope.go:117] "RemoveContainer" containerID="bdef9860011a61a194b20c81596b3006aeceb3e3c7fca8f752afd578bc95e402"